@manilx yes, that's it
we should change the "full backup interval" to "base/complete backup interval", to clarify what is a full . And we will do it.
we are reworking the import code, alongside the new backup code (https://xcp-ng.org/forum/topic/10664/our-future-backup-code-test-it), to unify the different disk transformation paths
For now:
@andrewreid Yes, I can't wait to share it with everybody.
@flakpyro today the backup code uses a binary stream in the VHD format. This format is limited, by design, to 2 TB disks
The XCP-ng team introduced the qcow2 format to handle bigger disks.
By using an independent format, we'll be able to handle both VHD and qcow2 on the backup side without multiplying complexity. We'll also be able to build the adapter to handle the various VMDK sub-formats (raw, cowd, seSparse and streamOptimized) used by V2V, and to import bigger disks directly
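To illustrate the idea, here is a minimal sketch of how a unified export path could pick the stream format per disk (hypothetical names, not the actual XO code):

```js
// Minimal sketch, hypothetical names (not the actual XO code): route a disk
// export through the right stream format, given the VHD design limit.
const VHD_MAX_SIZE = 2 * 1024 ** 4; // the ~2 TB ceiling mentioned above

export function chooseStreamFormat(diskSizeBytes) {
  // anything above the VHD ceiling has to go through qcow2
  return diskSizeBytes > VHD_MAX_SIZE ? 'qcow2' : 'vhd';
}

// chooseStreamFormat(3 * 1024 ** 4) // => 'qcow2'
```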
@b-dietrich said in Externalised backup LTO:
Hi everyone,
I would like to know if it's possible to externalise backups to a tape library with XOA?
Is it on the roadmap for 2024?
I will let @olivierlambert comment on the backlog point. It is still planned, but there is a lot of groundwork to do first.
That being said, the mirror backup feature has been built to pave the way to tape backup.
For now the easiest way to do tape backup is to use full backup to a backup repository used only for this, and to mirror it to tapes. At our scale, priorities can also change if there is a big enough sponsor, ready to take part of the financial load of this feature and to give us access to real-world hardware and processes.
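To make that workaround concrete, here is a hedged sketch of the two-stage layout as plain data (illustrative only, not the actual XO job schema):

```js
// Illustration only, NOT the real XO job schema: stage 1 writes fulls to a
// repository used for nothing else, stage 2 mirrors that repository to tape.
export const tapeWorkflow = {
  backupJob: {
    mode: 'full',                   // fulls only: each copy stays self-contained
    remote: 'BR-dedicated-to-tape', // backup repository reserved for this job
    schedule: '0 2 * * 0',          // e.g. every Sunday at 02:00
  },
  mirrorJob: {
    mode: 'mirror',
    source: 'BR-dedicated-to-tape',
    target: 'tape-library',         // whatever endpoint fronts the tapes
  },
};
```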
The test with @dumarjo showed that there is still a bug during the import. I am still investigating it and will keep you informed, hopefully today or tomorrow
@ismo-conguairta said in VMware migration tool: we need your feedback!:
I see two different behaviours on two different XO instances. Each XO instance refers to a different pool (different hosts, same XCP-ng version). On both instances I try to connect to the same Private Virtual Datacenter based on VMware/vSphere at OVH.
On the first one I get the following error message in the web UI: "invalid parameters" (take a look at this logfile: 2023-02-28T19_25_21.933Z - XO.txt)
On the second one, I get the following error message in the web UI: "404 Not Found https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dsName=<datastore-name>"
Using xo-cli, I get the "404 Not Found" on both instances.
Regarding the "404 Not Found", I want to point out that at OVH I have a VMware datacenter (with 2 hosts) and in order to access to the storage I need to specify the parameter
dcPath=<datacenter-name>
So the right URL should be https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dcPath=<datacenter-name>&dsName=<datastore-name>
Simply adding the dcPath specification (statically) on line 54 of the esxi.mjs file makes it work.
I thought it was constant. I will look into the API to get it, and if that's not possible, expose it in the UI
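For the record, the fix boils down to adding dcPath to the query string of the datastore URL; a minimal sketch (the URL shape comes from the post above, the helper itself is hypothetical):

```js
// Sketch of the URL fix discussed above (not the actual esxi.mjs code):
// the vSphere /folder endpoint needs dcPath when the datacenter is not the default one.
export function vmxUrl(vsphereIp, vmName, dsName, dcPath) {
  const url = new URL(`https://${vsphereIp}/folder/${vmName}/${vmName}.vmx`);
  if (dcPath !== undefined) {
    url.searchParams.set('dcPath', dcPath); // e.g. the OVH datacenter name
  }
  url.searchParams.set('dsName', dsName);
  return url.href;
}

// vmxUrl('10.0.0.1', 'myVM', 'myDatastore', 'pcc-dc-1')
// => 'https://10.0.0.1/folder/myVM/myVM.vmx?dcPath=pcc-dc-1&dsName=myDatastore'
```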
@Seclusion: noted, I will look into this error message, this one is a first for me
@brezlord MAC address and UEFI should work now
@rochemike patch done this morning
@Tristis-Oris thanks, I missed a file
I pushed it just now
@Tristis-Oris that is already good news.
I pushed an additional fix: the NBD info was not shown in the UI
@Tristis-Oris no it's on our end
Could you retry NBD, targeting a block-based directory?
On my test setup, with the latest changes, I get better speed than master (190 MB/s per disk vs 130-170 MB/s on master, depending on the run and settings)
I do see quite a big variation between identical runs (40 MB/s)
@vmpr the oldest one is always a full
a delta is the difference between this backup and the previous one, thus we need the last full plus every intermediate delta to restore
You could do infinite deltas, without ever making a new full, but this comes with a risk: if anything goes wrong (on the backup or the storage), you'll lose the full backup chain
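To spell out why the whole chain matters, here is a minimal sketch of what a restore has to collect (hypothetical structure, not the actual XO code):

```js
// Hypothetical sketch: gather everything needed to restore a given backup.
// Each record looks like { id, mode: 'full' | 'delta', parentId }.
export function chainToRestore(backups, id) {
  const byId = new Map(backups.map(b => [b.id, b]));
  const chain = [];
  for (let b = byId.get(id); b !== undefined; b = byId.get(b.parentId)) {
    chain.unshift(b); // oldest first, ending at the requested backup
    if (b.mode === 'full') {
      return chain; // the chain is anchored by the most recent full
    }
  }
  // a single corrupted or missing link invalidates every delta built on top of it
  throw new Error('broken chain: no full backup found');
}
```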
Would it be possible to store a shorter chain here, and use a mirror backup for longer retention on cheaper storage?
@Tristis-Oris I made a little change, can you update (like the last time) and retest?
@Tristis-Oris Do you have the same performance without NBD?
Does your storage use blocks?
@Tristis-Oris ouch, that is quite costly
Can you describe which backup you run?
Can you check if the duration is different (maybe this is a measurement error and not actually a slower speed)?
Hi @Tristis-Oris and @Davidj-0, I pushed an update to better handle NBD errors, please keep us informed
Thanks for your tests. I will recheck NBD backup
@vkeven you can request a meeting with our sales team, but technical discussions are better held here if possible