CBT: the thread to centralize your feedback
-
@rtjdamen It's still fresh, but on the other hand, the worst that can happen is falling back to a full backup. So for now I would not use it on the bigger VMs (multi-terabyte).
We are sure it will be a game changer on thick provisioning (because a snapshot costs the full virtual size) or on fast-changing VMs, where coalescing an older snapshot is a major hurdle. If everything goes well it will be in stable by the end of July, and we'll probably enable it by default on new backups in the near future.
-
Can't commit, too small for a ticket. Typo:
preferNbdInformation: 'A network accessible by XO or the proxy must have NBD enabled,. Storage must support Change Block Tracking (CBT) to ue it in a backup',
"enabled,." should be "enabled." and "to ue" should be "to use". -
Updated to the
fix_cbt
branch.
CR NBD backup works.
Delta NBD backup works (just once, so we can't be sure yet).
No broken tasks are generated.
Still confused why the CBT toggle is enabled on some VMs:
two similar VMs on the same pool, same storage, same Ubuntu version, yet one is enabled automatically and the other is not. -
@florent I did some testing with the data_destroy branch in my lab, and it seems to work as required; the snapshot is indeed hidden when it is CBT-only.
What I am not sure is correct: when the data-destroy action is done, I would expect a snapshot to show up for coalesce, but it does not. Is it too small, and removed so quickly that it will not be visible in XOA? On larger VMs in our production I can see these snapshots showing for coalesce. Or when you do vdi.data_destroy, will it try to coalesce directly without garbage collection afterwards?
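For anyone wanting to inspect this on the host side, a rough sketch using the XAPI xe CLI (assumes an XCP-ng/XenServer host; the UUID is a placeholder). After vdi.data_destroy, the snapshot VDI is kept as metadata only, which would explain why nothing appears in the coalesce queue:

```shell
# Hedged sketch, run on the pool master; <vdi-uuid> is a placeholder.
# List snapshot VDIs: after a data-destroy, the VDI type becomes
# cbt_metadata (metadata only, so no data remains for coalesce to merge).
xe vdi-list is-a-snapshot=true params=uuid,name-label,type

# Check whether CBT is enabled on a given VDI
xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled
```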
-
@florent What happens when we upgrade to this version by the end of July? We currently use NBD without CBT on most backups. Will they all need to run a full, or does it 'convert' the method to the CBT situation? I assume, since the checkbox for data destroy will be disabled in general, it will not change much for the backup on day one as long as you don't switch to the data-destroy option?
-
The transition to CBT should be smooth and require no manual intervention. @florent will provide more details on how it will work.
-
All tests with 2 VMs were successful so far; no issues found in our lab. Good job guys!
-
@olivierlambert How long until it's available for us on the precompiled XOA?
-
Tomorrow
-
@olivierlambert sounds good!
-
Things are looking good on my end as well.
-
@olivierlambert Looks like it's back to single-threaded bottlenecks...
I see a lot of single-core 100% utilization on the XO VM.
-
@Andrew Hi Andrew, I can't reproduce on my end; all cores are utilized at around 30 to 40% for 2 simultaneous backups.
-
@rtjdamen It happens when Continuous Replication is running. The source, the destination, and the network can all do 10 Gb/s.
I'll have to work on a better set of conditions and tests to replicate the issue.
I know it's slower because the hourly replication was taking 5-6 minutes and now takes 9-10 minutes. It's more of an issue when the transfer is >300GB.
Just feedback....
-
@Andrew understood! We do not use that at this time.
-
@olivierlambert Hi Olivier, do you have an ETA?
-
Today
-
@olivierlambert Running current XO master (commit 1ace3), using Backup and running a full backup to an NFS remote, it now ignores the
[NOBAK]
tag on a disk.
Also, if you cancel the export task, the backup is still marked as Successful. I never tried that before, so it may have always done that.
-
@Andrew Thanks for your feedback. We are currently investigating this issue, though we are having difficulties reproducing it.
"Also, if you cancel the export task the backup is still marked as Successful. I never tried that before so it may have always done that."
Yes, this is not supported; the task is only there for informational purposes.