Increasing VM disk size breaks Continuous Replication backup
-
Hello,
I'm running into an issue and would like to understand the proper procedure to avoid it going forward.
As the title suggests, increasing a VM's disk size breaks a Continuous Replication backup. My backup routine consists of a rolling snapshot, which is then copied to two remote SRs using the Continuous Replication option.
Before I modified the disk size, the backups were running perfectly; after the change they began failing with the errors:
VDI_IO_ERROR(Device I/O errors)
and
all targets have failed, step: writer.transfer()
I believe this has to do with either VDI chain protection or the SR not having coalesced yet. I did "Rescan All Disks" and it did not immediately alleviate the issue; was I not being patient enough?
It has been working since then, but my main question is: am I supposed to rescan all disks after that kind of VM disk modification so future backup runs don't fail?
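For reference, here is roughly what the rescan-and-check looks like from a script, using the XenAPI Python bindings. The host address, credentials, and SR name are placeholders for my setup, and the "vhd-parent" key is just what VHD-based SRs seem to expose in sm_config, so treat this as a sketch rather than a supported procedure:

```python
#!/usr/bin/env python3
# Sketch: trigger an SR rescan and report how deep the VHD chains are,
# using the XenAPI Python bindings (available in dom0, or `pip install XenAPI`).
# Host, credentials, and SR name below are placeholders.
import XenAPI

session = XenAPI.Session("https://your-xcp-ng-host")  # or XenAPI.xapi_local() in dom0
session.xenapi.login_with_password("root", "your-password")

try:
    sr = session.xenapi.SR.get_by_name_label("Local storage")[0]
    session.xenapi.SR.scan(sr)  # equivalent to "Rescan All Disks"

    # Map each VDI uuid to its parent VHD uuid, taken from sm_config
    # (VHD-based SRs usually expose it as "vhd-parent"; adjust if yours differs).
    parent_of = {}
    for vdi in session.xenapi.SR.get_VDIs(sr):
        uuid = session.xenapi.VDI.get_uuid(vdi)
        sm_config = session.xenapi.VDI.get_sm_config(vdi)
        if "vhd-parent" in sm_config:
            parent_of[uuid] = sm_config["vhd-parent"]

    # Walk each chain; long chains mean the coalesce/GC has not caught up yet.
    for uuid in parent_of:
        depth, cur = 1, uuid
        while cur in parent_of:
            cur = parent_of[cur]
            depth += 1
        print(f"VDI {uuid}: chain depth {depth}")
finally:
    session.xenapi.session.logout()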
Thank you!
-
Hello,
Yeah, I think we fixed that upstream in XenServer, but for some reason the issue is back.
-
I have the same problem... I just deleted the snapshots and the replicated copy and let the job start over. Not the best option, but it works. It would be nice if backups worked without deleting the old stuff.
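In case it helps anyone, the cleanup I did was roughly equivalent to the sketch below (XenAPI Python bindings; the host, credentials, and VM name are placeholders, and you should double-check which snapshots actually belong to the backup job before destroying anything):

```python
#!/usr/bin/env python3
# Sketch of the manual cleanup: remove a VM's snapshots (and their disks) so
# Continuous Replication can start over with a full copy. Names/credentials
# are placeholders; verify the snapshots belong to the backup job first.
import XenAPI

session = XenAPI.Session("https://your-xcp-ng-host")
session.xenapi.login_with_password("root", "your-password")

try:
    vm = session.xenapi.VM.get_by_name_label("my-vm")[0]
    for snap in session.xenapi.VM.get_snapshots(vm):
        print("Removing snapshot:", session.xenapi.VM.get_name_label(snap))
        # A snapshot's VDIs are not removed by VM.destroy, so drop them first.
        for vbd in session.xenapi.VM.get_VBDs(snap):
            if session.xenapi.VBD.get_type(vbd) == "Disk":
                session.xenapi.VDI.destroy(session.xenapi.VBD.get_VDI(vbd))
        session.xenapi.VM.destroy(snap)
finally:
    session.xenapi.session.logout()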
-
Sorry, I should have specified my version:
xo-server 5.84.2
xo-web 5.90.0
-
It's not an XO problem, but an XCP-ng storage stack one. With Continuous Replication, XO doesn't deal with VHDs at all.
-
@olivierlambert Ah duh.
That being said, the XCP-ng version is 8.2.0.
-
So should I simply be patient and wait for an update?
Or does this warrant a bug report on git?
Thanks
-
Good question. I'll try to find my original bug report to Citrix first.
-
I only vaguely remember that we had a similar issue with Xen Orchestra coalescing VHDs (on our side) and suggested a fix to Citrix, but I can't find it anymore.
-
@julien-f does this ring a bell (at least in Xen Orchestra)? We had to adapt our VHD code to handle growing VHDs being merged.
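To illustrate the shape of the problem (this is not XO's actual code, just a conceptual sketch with vhd-util and placeholder paths): when the child VHD has been resized, the parent has to be grown to at least the same virtual size before the child can be coalesced into it, otherwise the merge fails.

```python
#!/usr/bin/env python3
# Conceptual sketch of merging a "grown" VHD child into its parent:
# grow the parent first if the child's virtual size is larger, then coalesce.
# Paths and the journal location are examples only.
import subprocess

def virtual_size_mb(vhd_path):
    # `vhd-util query -n <file> -v` prints the virtual size in MiB.
    out = subprocess.check_output(["vhd-util", "query", "-n", vhd_path, "-v"], text=True)
    return int(out.split()[0])

def coalesce_with_resize(child, parent, journal="/tmp/vhd-resize.journal"):
    child_mb = virtual_size_mb(child)
    parent_mb = virtual_size_mb(parent)
    if child_mb > parent_mb:
        # Grow the parent first, otherwise the merge would write past its end.
        subprocess.check_call(
            ["vhd-util", "resize", "-n", parent, "-s", str(child_mb), "-j", journal]
        )
    # Merge the child's blocks into its (now large enough) parent.
    subprocess.check_call(["vhd-util", "coalesce", "-n", child])

# Example usage (placeholder paths on a file-based SR):
coalesce_with_resize("/run/sr-mount/<sr-uuid>/child.vhd",
                     "/run/sr-mount/<sr-uuid>/parent.vhd")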
-
Hello, sorry to bug you, but I was curious about the status of this. Were there any bug fixes made to alleviate this issue?
Thanks
-
I don't think we had time to investigate it, sadly.