you can uncheck the "Merge backups synchronously" option in the advanced view of the backup job to have the merge run inside the backup job, thus making it visible
(we are reworking the internal design for better observability, but there is a lot of work to do before that)
@dnordmann the tunnel is closed. By the way, the patch will be released tomorrow on latest, and by the end of December on stable
thank you all for your patience and your help identifying the root cause of this bug
Cancelling a backup run is also on our roadmap, as soon as we are done reworking the tasks to use xo-task on the full chain, instead of a hybrid of various iterations of the task objects
Better tracking of the backup run is also on our roadmap; it is linked to the same task changes, plus there are some issues with the size on the remote to sort out before we can do this
In any case, both are on our backlog
@dnordmann said in V2V - Stops at 99%:
Ticket#7747444
and I just opened another ticket for the other client that is having the same issue. Ticket#7748053.
Support tunnels should be open for both clients.
Thanks!
I deployed the patch on the new client; if it's OK, I will do the second one after
@dnordmann if I remember correctly, you opened a ticket from your xoa. Can you provide us with the ticket number so we can patch your xoa?
if you are using xo from source, you need to switch to the branch and then restart xo-server
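For reference, a minimal sketch of what that usually looks like on a from-source install (the checkout path and the branch name are assumptions, adapt them to your setup):

cd /opt/xen-orchestra        # path is an assumption, use your own checkout
git fetch origin
git checkout <branch>        # the branch carrying the patch
yarn && yarn build           # rebuild after switching branches
systemctl restart xo-server  # or restart xo-server however you usually run it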
@MajorP93 that's nice to hear that it at least solved the issue
are you using an xoa, or a build compiled from source?
what is the user that runs the xo service?
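If you're not sure, something like this should show it (assuming the process is named xo-server, which may differ on your setup):

ps -o user= -p "$(pgrep -f xo-server | head -n 1)"   # prints the user owning the xo-server process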
@dnordmann @tsukraw
thank you for your patience, we found something while working with the xcp storage team, and there is an issue with the last block size
Can you confirm that the failing VM has at least one disk with a size not aligned to 2 MB?
Could you test this PR on the failing import? https://github.com/vatesfr/xen-orchestra/pull/9233
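If you're running from source, a minimal sketch of one way to fetch and test that PR (the checkout path and the local branch name are just examples):

cd /opt/xen-orchestra                     # path is an assumption
git fetch origin pull/9233/head:pr-9233   # fetch the PR into a local branch
git checkout pr-9233
yarn && yarn build
systemctl restart xo-server               # or restart xo-server however you usually run it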
regards
@tsukraw hard to answer as is
do you have anything in your xo log (console / journalctl) or in the task log?
If not, the xo log should at least say something like "nbdkit logs of ${diskPath} are in /tmp/xo-serverxxxx"
can you check if there is something at the end of that file?
1.5 TB is OK in VHD
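For reference, a minimal sketch of where to look, assuming xo-server runs as a systemd service (the unit name may differ on your install):

journalctl -u xo-server --since "1 hour ago" | grep -i nbdkit   # find the "nbdkit logs of ..." line
# then tail the path reported in that line, for example:
tail -n 100 /tmp/xo-serverxxxx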
@tsukraw we will have to improve the doc
First migration:
[vm source] => [vm on xcp]
Check:
[vm source] => [vm on xcp] => [copy/clone of vm on xcp]
test on [copy/clone of vm on xcp] <--
remove [copy/clone of vm on xcp]
Final migration:
stop [vm source]
launch the same migration again; the data already transferred won't be retransferred
apply the fixes you had to do during the previous test
start [vm on xcp]
@tsukraw yes, it is already possible, and there is even documentation for it: https://docs.xen-orchestra.com/v2v-migration-guide#-final-migration
It works the same through xo-cli or through the UI. You'll need to keep the same storage/template for both migrations, don't create a new snapshot on the vmware side, and it can only work if warm migration works
@stevewest15 these errors are normal on XO from source, since you don't have licenses
the configuration to use for ipmi sensors depends on the bios strings.
Could you post the bios strings of your hosts? The Dell configuration is used if 'system-manufacturer' contains the string dell
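If it helps, you can read those strings directly on the host with dmidecode (a sketch; these are the standard DMI keywords):

dmidecode -s system-manufacturer   # e.g. "Dell Inc."
dmidecode -s system-product-name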
@AlexD2006 thanks for reporting this, we just merged a fix that resolves it in our lab
can you test it on your side?
https://github.com/vatesfr/xen-orchestra/pull/9202
@probain thanks for reporting this, we just merged a fix that resolves it in our lab
can you test it on your side?
https://github.com/vatesfr/xen-orchestra/pull/9202
@AlexD2006 this looks like the right issue, we will look into it immediately (yesterday was a public holiday in France)
@MajorP93 the config should be in ~/.config/xo-server/ for the user running xo-server
It is noted
@Pilow said in Long backup times via NFS to Data Domain from Xen Orchestra:
@florent what if we use XO Proxies ?
the conf should then be on the proxy, in /etc/xo-proxy/
@MajorP93 this setting exists (but not in the UI)
you can create a configuration file named /etc/xo-server/config.diskConcurrency.toml if you use an xoa
containing:
[backups]
diskPerVmConcurrency = 2
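A minimal sketch of the whole operation on an xoa (assuming the standard xo-server systemd unit):

cat > /etc/xo-server/config.diskConcurrency.toml <<'EOF'
[backups]
diskPerVmConcurrency = 2
EOF
systemctl restart xo-server   # restart so the new setting is taken into account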
@MajorP93
Interesting. Note that this is orthogonal to NBD.
I note that there is probably more work to do to improve performance, and I will retest with a VM that has a lot of disks.
Performance really depends on the underlying storage.
Compression and encryption can't be done in "legacy mode", since we wouldn't be able to merge blocks in place in that case.
@acebmxer is 10.120.20.119 your xo proxy?
I misread your question, you want to exclude the xoa from the process, not the proxy. Today the disk conversion can only be done directly by the xoa.
The data flow is always {vsphere/esxi} => xoa => xcp-ng, with a proxy that can sit between the xoa and xcp-ng.
The only way to handle this without going through delilah would be to deploy a second xoa/xoce directly in the tilton pool. You can have multiple xoa connected to the same pool.