@manilx yes, that's it
We should change the "full backup interval" label to something like "base/complete backup interval", to clarify what a "full" is. And we will do it.
We are reworking the import code, alongside the new backup code ( https://xcp-ng.org/forum/topic/10664/our-future-backup-code-test-it ), to unify the different disk transformation paths.
For now:
@andrewreid Yes, I can't wait to share it with everybody.
@Andrew Then again, with such a precise report, the fix is easier.
The fix should land in master soon.
Found it: there was not one but two bugs.
Albeit small, they were putting the XO backups in an incorrect state.
The fix is here, and will be merged as soon as possible, then released as a patch for XOA users:
https://github.com/vatesfr/xen-orchestra/pull/8780
Thank you very much, everyone, for the information.
@RobWhalley saved the day
We are still working on it, to better understand why it fixes the issue and to improve the situation further.
@RobWhalley nice work, I am testing it right now
@olivierlambert Nice catch
CBT will only be enabled when using "purge snapshot data"; I will fix the first one.
(This is because CBT does not improve things without "purge snapshot data", but has some edge cases, so at least we won't enable it for nothing.)
@olivierlambert @Gheppy That is a nice catch, and it gives us an interesting clue.
I am currently working on it
@JB Do you have anything related in journalctl? Like "can't connect through NBD, fall back to stream export",
or "can't compute delta ${vdiRef} from ${baseRef}, fallBack to a full",
or anything starting with 'xo:xapi:xapi-disks'?
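If it helps, here is a quick way to pull those lines out of the logs (a rough example, assuming xo-server runs as the xo-server systemd unit, as on XOA; adapt the unit name and the pattern for a from-source install):

```
# show recent xo-server log lines related to NBD / delta fallbacks
journalctl -u xo-server -n 20000 | grep -Ei "nbd|fall ?back|xo:xapi:xapi-disks"
```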
@JB This means that it should have done a delta (as per the full backup interval), but had to fall back to a full for at least one disk.
This can happen after a failed transfer, a newly added disk, and in some edge cases. This issue was not really visible before the latest release, even though the impact can be important, saturating network and storage.
We are investigating this (especially @Bastien-Nollet), and expect to have a fix and/or an explanation quickly.
Are you using the "purge snapshot data" option? Is there anything in the journalctl logs?
@cgasser
Are the backup jobs (if any) running normally?
Can you temporarily disable the backup network? The RRD stats should be visible immediately after, if this is the root cause.
@cgasser That means that if you use a backup network and your XO(A) can't access this network, the backups will fail, but so will the stats.
Do you have a default backup network set on the advanced tab of the pool view?
The RRD stats go through it.
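To rule the network out, a quick reachability test from the XO(A) VM can help (a hedged sketch; replace the placeholder address with your host's IP on that network, since as far as I know the RRD stats are fetched directly from the host over HTTPS):

```
# from the XO(A) VM: can we reach the host on the backup network?
ping -c 3 192.0.2.11
curl -sk --max-time 5 -o /dev/null -w "%{http_code}\n" https://192.0.2.11/
```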
@idar21 said in Xen Orchestra 5.110 V2V not working:
I don't intend to bump in, but the new migration tool isn't working as per the release notes.
I had similar issues; there is no warm migration. My testing against ESXi v7 resulted in:
- Abrupt power off of the source VM on ESXi.
- VM disks start copying. I can see the disk copy progress in tasks.
- The migration task fails, but multiple disks of the source VM keep on copying.
- When all the disks are copied, there is no VM with that name available in XCP-ng.
- All disks are labeled orphaned under Health in XO.
Where is the pause/resume function as stated in the release notes?
I don't think the tool has been tested properly. The only difference from the older migration tool to this one is the progress display for disk copying; otherwise nothing new. The old tool could only do cold migrations and had issues with VMs with multiple disks. The new one can also only do cold migrations and still has issues with multi-disk migrations.
First, I would like to say again that "latest" can be fresh, and we know that we ask our users on "latest" to be more adventurous, in exchange for getting features faster. Even more so for users installing from the sources.
The documentation is still in the works, and will definitely be ready before this reaches "XOA stable". The resume part doesn't have a dedicated interface: you do a first migration without enabling "stop source", and then, later, you launch the same migration with "stop source" enabled (or with the VM stopped), and it will reuse the already transferred data if the prerequisites are validated.
Then, debugging a migration issue is quite complex, since it involves multiple systems, and we don't have any access to, nor control over, the VMware side. It's even harder without a support tunnel.
I will need you to look at your journalctl and check for errors during the migration. Also, do the failing disks share some specific configuration? What storage do they use? Is there anything relevant on the XCP-ng side?
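For example, something like this while re-running the migration (again assuming the xo-server systemd unit, as on XOA; the grep pattern is only a starting point, the exact messages may differ):

```
# follow xo-server logs live during the migration and keep likely relevant lines
journalctl -u xo-server -f | grep -iE "error|esxi|import|vmdk"
```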
@farokh The VMs (source or target) are not started automatically.
After the migration, the VM on the XCP-ng host should be ready to start, with snapshots corresponding to the steps of the replication.
This is not automatic because, for now, we have put most of the work into the disk data transfer, but there is much to do around it to ensure the VM starts exactly as intended: guest tools, advanced network or storage configuration.
@farokh Do you have at least one snapshot on the source? The data migrated while the VM is running is the data from before the last snapshot.
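If you want to double-check from the ESXi side, something like this from an ESXi shell should show whether the VM has snapshots (a hedged example; the VM id comes from the first command):

```
# list VMs with their ids, then show the snapshot tree of a given VM
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.get <vmid>
```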
The power off is effectively a call to powerOff; is there a better alternative (one that works with or without the VMware tools installed)?
The progress bars are visible in the task view, or on the disk tab of the VM being imported (not yet in the form; this will probably wait for the XO 6 version of this page).
@farokh If my understanding is correct: it migrated the source VM correctly, without breaking anything on the ESXi side, but it would be better if the V2V tool gave more feedback to the user?
Also, you would like to be able to select different networks as targets?