Best posts made by rtjdamen
-
RE: CBT: the thread to centralize your feedback
@andyh No, CBT should be disabled; you can't migrate a CBT-enabled VDI.
-
RE: CBT: the thread to centralize your feedback
You need to remove all snapshots and disable CBT before migration; storage migration is not supported on a CBT-enabled VDI. I believe XOA should do this automatically, however.
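For anyone doing this by hand rather than through XOA, a minimal sketch with the stock xe CLI (snapshot cleanup itself is left out, and the VDI UUID is a placeholder):

```bash
# List every VDI that currently has CBT enabled
xe vdi-list cbt-enabled=true params=uuid,name-label

# Disable CBT on a VDI before the storage migration
# (replace <vdi-uuid> with a UUID reported above)
xe vdi-disable-cbt uuid=<vdi-uuid>
```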
-
RE: Question on backup sequence
@florent OK, thanks, but how does this work for a job with multiple retentions? If I have a job with three schedules and retentions set, how does the sequence handle this? In other words, since retention is set at the schedule level, if I disable one of the three schedules, how does the sequence know which retention it should keep?
-
RE: CBT: the thread to centralize your feedback
@Andrew We see the same behavior here; no strange backup issues so far!
-
RE: CBT: the thread to centralize your feedback
Hi all, I can confirm the VDI_IN_USE error is resolved by https://github.com/vatesfr/xen-orchestra/pull/7960; we no longer see issues there.
The only remaining issue we see is the "can't create a stream from a metadata VDI, fall back to base" message.
-
RE: Question about mirror backups
@olivierlambert I have changed it to NFSv3 and it feels like it is performing normally now. I need to do some more testing, but it seems the Synology handles the reads/writes better on v3 than on v4.
-
RE: Question about mirror backups
@olivierlambert Yes, I will check this out as well. The mirror job itself seems to perform well. I changed the NFS remote to v3 as a first test for now; I read there is more overhead on v4, so maybe this will improve things a bit. I will share the results!
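To illustrate the kind of comparison I mean, a rough sketch for mounting the same export with both NFS versions from a Linux host and measuring raw write throughput (the hostname and export path are made up):

```bash
# Hypothetical Synology export, for illustration only
mkdir -p /mnt/nfs3 /mnt/nfs4

# Force NFSv3 on one mount and NFSv4.1 on the other
mount -t nfs -o vers=3   synology:/volume1/backups /mnt/nfs3
mount -t nfs -o vers=4.1 synology:/volume1/backups /mnt/nfs4

# Compare sequential write throughput on each mount
dd if=/dev/zero of=/mnt/nfs3/testfile bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/mnt/nfs4/testfile bs=1M count=1024 oflag=direct
```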
-
RE: CBT: the thread to centralize your feedback
@manilx Nope, but I have talked with a dev about it and they are looking into making it a setting somewhere; I don't know the status of that. Good to see this works for you!
-
RE: CBT: the thread to centralize your feedback
@olivierlambert, I first want to compliment the work that has been done. As a first release, it already seems very stable. I have shared some logs with support to investigate the data_destroy issue and some minor error messages that appeared. We managed to migrate all our backup jobs to CBT over the weekend. It was challenging to coalesce all the snapshots, but it has been completed. The difference in coalesce speed is significant, which is a great improvement for XOA backups. I will monitor the backups and observe how they evolve in the coming weeks.
Please let us know if you need any additional input or if there are any updates regarding the data_destroy issue.
Latest posts made by rtjdamen
-
RE: CBT: the thread to centralize your feedback
@flakpyro We are still on 8.2, so maybe there is some difference there.
-
RE: CBT: the thread to centralize your feedback
@flakpyro Are you running the latest XCP-ng version, 8.2 or 8.3?
-
RE: CBT: the thread to centralize your feedback
@flakpyro I have just tested live and offline migration on our end; both kept CBT alive. Tested on both iSCSI and NFS.
-
RE: CBT: the thread to centralize your feedback
@flakpyro Is there any difference between migrating with the VM powered on and powered off?
-
RE: CBT: the thread to centralize your feedback
@flakpyro I can't reproduce this on our end: after a migration within the pool on the same SR, CBT is preserved. When I migrate to a different SR, CBT is reset.
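If anyone wants to verify this on their own pool, a quick sketch for checking the CBT flag before and after a migration (the VDI UUID is a placeholder):

```bash
# Show whether CBT is still enabled on a given VDI
xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled

# Or list every CBT-enabled VDI on the pool and compare before/after
xe vdi-list cbt-enabled=true params=uuid,name-label,sr-name-label
```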
-
RE: CBT: the thread to centralize your feedback
@florent Sounds like a plan! I will keep an eye on them and let you know the results!
-
RE: CBT: the thread to centralize your feedback
@flakpyro No, on my end there is no difference between NFS and iSCSI for XOA backups. We only had this issue with Alike Backup; my assumption was that it would be the case in XOA as well, but not at this time.
What I found out is that one of my VMs facing this issue has two disks, each on a different SR. I am moving one of the disks to the same SR and will check tomorrow (tonight will be a full after migrating) whether this is still the case there.
The other VM we face this issue on has both disks on the same SR, so that cannot be the cause there. I will keep you posted on the results.
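For reference, this is roughly how I check which SR each disk of a VM lives on with the xe CLI (the VM name and VDI UUID are placeholders):

```bash
# List the disks (VBDs) of the VM and the VDIs behind them
xe vbd-list vm-name-label=<vm-name-label> params=device,vdi-uuid

# For each VDI, show which SR it lives on
xe vdi-param-get uuid=<vdi-uuid> param-name=sr-name-label
```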
-
RE: CBT: the thread to centralize your feedback
@florent Thanks for letting me know. On our end this error seems to occur on the same VMs every time; it is just a handful. Could it be that these VMs are under higher load, which causes XAPI tasks to take longer than expected?
-
RE: CBT: the thread to centralize your feedback
@chr1st0ph9 I understand a fix is being made for this. @florent patched our proxy yesterday, and since then no more fulls so far!