@manilx yes, that's it
we should change the "full backup interval" to "base/complete backup interval", to clarify what is a full . And we will do it.
we are reworking the import code, alongside the new backup code ( https://xcp-ng.org/forum/topic/10664/our-future-backup-code-test-it ), to unify the different disk transformation paths
For now:
@andrewreid Yes, I can't wait to share it with everybody.
found it, there was not one, but two bugs
albeit small, they were putting the XO backup in an incorrect state
the fix is here, and will be merged as soon as possible and then released as a patch for xoa users
https://github.com/vatesfr/xen-orchestra/pull/8780
really, thank you everyone for the information
@RobWhalley saved the day
We are still working on it to better understand why it fixes and improves the situation
@RobWhalley nice work, I am testing it right now
@olivierlambert Nice catch
CBT will only be enabled when using purge snapshot data; I will fix the first one
(this is because CBT does not improve things without purge snapshot data, but has some edge cases, so at least we won't enable it for nothing)
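to make the intended behaviour concrete, here is a minimal sketch (not the actual xen-orchestra code, and the setting names are illustrative assumptions):

```ts
// minimal sketch, NOT the actual xen-orchestra code;
// the setting names (preferNbd, purgeSnapshotData) are illustrative assumptions
interface IncrementalBackupSettings {
  preferNbd: boolean          // "transfer disks through NBD" toggle
  purgeSnapshotData: boolean  // "purge snapshot data" toggle
}

// CBT only pays off when the snapshot data is purged afterwards:
// without the purge it brings no gain but still has edge cases,
// so we don't enable it for nothing
function shouldEnableCbt(settings: IncrementalBackupSettings): boolean {
  return settings.preferNbd && settings.purgeSnapshotData
}
```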
@olivierlambert @Gheppy that is a nice catch and it gives us an interesting clue
I am currently working on it
@manilx said in Our future backup code: test it!:
@florent Yes, just fine.
ok, so that's an issue with the mirror. I reproduced it last night in my lab
I am working on a fix
@farokh the VMs (source or target) are not started automatically
After the migration, the VM on the XCP-ng host should be ready to start, with snapshots corresponding to the steps of the replication
this is not automatic because, for now, we have focused the work on the disk data transfer, but there is still much to do around it to ensure the VM starts exactly as intended: guest tools, advanced network or storage configuration.
@farokh do you have at least one snapshot on the source? The data migrated while the VM is running is the data from before the last snapshot
The power off is effectively a call to powerOff; is there a better alternative (one that can work with or without the VMware Tools installed)?
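for context: in the vSphere API a guest-initiated shutdown (ShutdownGuest) needs VMware Tools running in the VM, while a hard power off (PowerOffVM_Task) always works. A possible fallback, sketched with a purely hypothetical `EsxiClient` interface (not the real XO code):

```ts
// hypothetical client interface, only to illustrate the fallback logic;
// in the vSphere API these map to ShutdownGuest (needs VMware Tools)
// and PowerOffVM_Task (hard power off, always available)
interface EsxiClient {
  isToolsRunning(vmId: string): Promise<boolean>
  shutdownGuest(vmId: string): Promise<void> // graceful, needs tools
  powerOff(vmId: string): Promise<void>      // hard stop
}

async function stopVm(client: EsxiClient, vmId: string): Promise<void> {
  if (await client.isToolsRunning(vmId)) {
    // clean shutdown when the guest tools are available
    await client.shutdownGuest(vmId)
  } else {
    // otherwise fall back to the hard power off used today
    await client.powerOff(vmId)
  }
}
```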
the progress bars are visible in the task view, or on the disk tab of the VM being imported (not yet in the form; this will probably wait for the XO6 version of this page)
@farokh If my understanding is correct: it migrated the source VM correctly, without breaking anything on the ESXi side, but it would be better if the V2V tool gave more feedback to the user?
Also, you would like to be able to select different networks as targets?
since the first testers reduced the list of supported versions, we added tools to compile the right version (at least on Debian 12 and 13)
can you retry and tell us if it's ok?
@olivierlambert we have a WIP for S3 to improve the listing performance. It will take a few days before having a testable fix
@cairoti you should be able to add it as a remote (not as a storage repository on XCP-ng)
@cairoti you know you can back up directly to S3, it will be more efficient that way (we reworked the S3 code 3 years ago, because of S3 limitations)
you should type a 32-character hexadecimal key (0 to 9, A to F)
Note that if you lose this key, all your backups will be lost, and the key can only be changed when the remote is empty
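if it helps, a 32-character hexadecimal key is just 16 random bytes rendered as hex; for example with Node's crypto module (or `openssl rand -hex 16` on the command line):

```ts
import { randomBytes } from 'node:crypto'

// 16 random bytes rendered as hex = a 32-character key (0-9, a-f)
// store it safely: without it the encrypted backups cannot be read
const encryptionKey = randomBytes(16).toString('hex')
console.log(encryptionKey) // e.g. "9f2c1a..." (32 characters)
```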
@robyt you're doing incremental backups in two steps: complete backups (full/key disks) and deltas (differencing/incremental). Both of these are transferred through an incremental mirror
on the other hand, if you do Backup, it builds one XVA file per VM containing all the VM data at each backup. These are transferred through a full backup mirror
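to make the two modes concrete, here is a small sketch (my own illustration, not the XO code, and it assumes the "full backup interval" simply counts runs):

```ts
// illustration only: assumes "full backup interval" means
// "one base/complete backup every N runs, deltas in between"
type BackupKind = 'base' | 'delta' | 'xva-full'

// incremental backup job: run 0, N, 2N, ... transfer the complete disks
// (full/key backup), every other run only the blocks changed since the last one
function incrementalBackupKind(runIndex: number, fullBackupInterval: number): BackupKind {
  return runIndex % fullBackupInterval === 0 ? 'base' : 'delta'
}

// a plain "Backup" job always exports one self-contained XVA file per VM,
// so every run is effectively a full transfer
function plainBackupKind(): BackupKind {
  return 'xva-full'
}
```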
we are working on clarifying the vocabulary