manilx yes, that's it
we should change the "full backup interval" label to "base/complete backup interval", to clarify what a full backup is. And we will do it.
we are reworking the import code, along with the new backup code ( https://xcp-ng.org/forum/topic/10664/our-future-backup-code-test-it ), to unify the different disk transformation paths
For now:
andrewreid Yes, I can't wait to share it with everybody.
flakpyro today the backup code uses a binary stream in the VHD format. This format is limited, by design, to 2 TB disks
The XCP-ng team introduced the qcow2 format to handle bigger disks
By using an independent format, we'll be able to handle both VHD and qcow2 on the backup side without multiplying complexity. We'll also be able to build the adapter to handle the various VMDK sub-formats (raw, cowd, sesparse and stream optimized) used by V2V, and to import bigger disks directly
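To make the size constraint concrete, here is a minimal sketch (with made-up names, not the actual xen-orchestra code) of how a backup writer could pick the target format based on the VHD limit mentioned above:

```js
// Hypothetical sketch: choose the target format for an exported disk,
// based on the 2 TiB design limit of the VHD format.
const VHD_MAX_SIZE = 2 * 1024 ** 4 // 2 TiB

function chooseBackupFormat(diskSizeBytes) {
  // small disks can stay on VHD for compatibility,
  // anything larger has to go through qcow2
  return diskSizeBytes <= VHD_MAX_SIZE ? 'vhd' : 'qcow2'
}

console.log(chooseBackupFormat(500 * 1024 ** 3)) // 'vhd'
console.log(chooseBackupFormat(3 * 1024 ** 4))   // 'qcow2'
```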
@b-dietrich said in Externalised backup LTO:
Hi everyone,
I would like to know if it's possible to externalise backups to a tape library with XOA?
Is it on the roadmap for 2024?
I will let olivierlambert answer on the backlog point. It is still planned, but there is a lot of groundwork needed first.
That being said, the mirror backup feature has been built to pave the way to tape backup.
For now, the easiest way to do tape backup is to use full backup to a backup repository used only for this, and to mirror it to tapes. At our scale, priorities can also change if there is a big enough sponsor ready to take on part of the financial load of this feature and give us access to real-world hardware and processes.
the test with dumarjo showed that there is still a bug during the import. I am still investigating it and will keep you informed, hopefully today or tomorrow
ismo-conguairta said in VMware migration tool: we need your feedback!:
I have two different behaviours on two different XO instances. Each XO instance refers to a different pool (different hosts, same XCP-ng version). In both instances I try to connect to the same Private Virtual Datacenter based on VMware/vSphere at OVH.
In the first one I get the following error message by using the web UI: "invalid parameters" (take a look at this logfile: 2023-02-28T19_25_21.933Z - XO.txt)
In the second one, I get the following error message by using the web UI: "404 Not Found https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dsName=<datastore-name>"
By using xo-cli I get the "404 Not Found" on both instances.
Regarding the "404 Not Found", I want to point out that at OVH I have a VMware datacenter (with 2 hosts) and in order to access the storage I need to specify the parameter
dcPath=<datacenter-name>
So the right URL should be https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dcPath=<datacenter-name>&dsName=<datastore-name>
Simply adding the dcPath specification statically on line 54 of the esxi.mjs file makes it work.
I thought it was constant. I will look into the API to get it, and if not possible, expose it in the UI
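For reference, a small sketch of how the datastore URL from the report above could be built with dcPath made optional; the function name, parameters and placeholder values are illustrative, not the actual esxi.mjs code:

```js
// Hypothetical helper: build the URL used to fetch a VM's .vmx file from a
// vSphere datastore, adding dcPath only when a datacenter name is provided.
function buildVmxUrl({ host, dcPath, dsName, vmName }) {
  const url = new URL(`https://${host}/folder/${encodeURIComponent(vmName)}/${encodeURIComponent(vmName)}.vmx`)
  if (dcPath !== undefined) {
    url.searchParams.set('dcPath', dcPath)
  }
  url.searchParams.set('dsName', dsName)
  return url.toString()
}

// Example with placeholder values:
// buildVmxUrl({ host: 'vsphere.example', dcPath: 'pcc-dc-1', dsName: 'datastore-1', vmName: 'my-vm' })
// => 'https://vsphere.example/folder/my-vm/my-vm.vmx?dcPath=pcc-dc-1&dsName=datastore-1'
```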
Seclusion: noted, I will look into this error message, this one is a first for me
brezlord MAC address and UEFI should work now
rochemike patch done this morning
olivierlambert nice catch
It's following a dependency update, removing an old one (node-fetch). The fix should be merged this morning (with the ability to resume an import): https://github.com/vatesfr/xen-orchestra/pull/8440
so that is probably only an off-by-one error in the task code
Thanks andrew
Andrew nice catch, I will look into it
is it keeping disks attached to dom0? (in Dashboard -> Health)
McHenry xo-cli backupNg.getLogs --json limit='json:500' should work (the command line parameters are considered as strings)
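As an illustration of that string-vs-JSON distinction (a sketch of the convention only, not the actual xo-cli parser):

```js
// A value prefixed with "json:" is parsed as JSON,
// everything else stays a plain string.
function parseCliValue(value) {
  return value.startsWith('json:') ? JSON.parse(value.slice('json:'.length)) : value
}

parseCliValue('json:500') // 500   (a number)
parseCliValue('500')      // '500' (a string)
```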
vkeven we don't have (for now) the feature to create buckets directly from XO. Also I think it is more secure if XO doesn't know the bucket admin credentials at all
katapaltes hi, I do think we didn't handle this case.
To be fair, the sheer breadth of VMware's capabilities is always impressive
Would you be able to show us the VM metadata (especially the .vmx and .vmsd) so we can see how to detect them? We probably won't be able to read this snapshot data for now, so we'll have to at least document a reliable process (see the sketch below)
On the fast clone side: did you try the checkbox at the bottom (fast clone)? It should be pretty fast since no data is copied.
On the native snapshot side: I know it's on our roadmap, but I can't give you an ETA
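As a starting point for the detection discussed above, here is a hedged sketch that reads a .vmsd file (a plain key = "value" dictionary) and checks for snapshots; the key names are based on commonly seen .vmsd content and are an assumption, not something validated across VMware versions:

```js
// Parse a .vmsd text blob into a flat { key: value } object.
function parseVmsd(text) {
  const entries = {}
  for (const line of text.split(/\r?\n/)) {
    const match = line.match(/^([\w.]+)\s*=\s*"(.*)"\s*$/)
    if (match !== null) {
      entries[match[1]] = match[2]
    }
  }
  return entries
}

// Assumed key: "snapshot.numSnapshots" counts the snapshots declared in the file.
function hasVmwareSnapshots(vmsdText) {
  const entries = parseVmsd(vmsdText)
  return Number(entries['snapshot.numSnapshots'] ?? '0') > 0
}
```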
vmpr there is a high chance that we will improve the scheduler, so you may be able to plan more precisely when the full occurs, but we will always keep one full backup at the beginning of the chain.
You can mitigate the risk of an infinite schedule by using health checks: restoring the backup automatically after the job. This ensures that, at the time of backup, the files are correct, and that if an issue occurs in the backup you can start a new chain, thus detecting a backup storage issue before you need the backup after a production issue.
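A rough sketch of that chain-reset logic, using a made-up job API rather than the actual Xen Orchestra code:

```js
// Illustration only: a failed health-check restore forces the next run
// to start a new chain with a full backup.
async function runScheduledBackup(job) {
  const mode = job.forceFull ? 'full' : 'delta'
  await job.run(mode)                      // hypothetical: perform the backup
  const healthy = await job.healthCheck()  // hypothetical: restore and boot-test the backup
  job.forceFull = !healthy                 // broken chain? start a fresh one next time
  return healthy
}
```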
Tristis Oris thanks, I missed a file
I pushed it just now
Tristis Oris that is already good news.
I pushed an additional fix: the NBD info was not shown in the UI
Tristis Oris no it's on our end
Could you retry NBD + targeting a block-based directory?
On my test setup, with the latest changes I get better speeds than master (190 MB/s per disk vs 130-170 MB/s on master, depending on the run and settings)
I got quite a large variation between identical runs (40 MB/s)