XCP-ng
    Increasing VM disk size breaks Continuous Replication backup

    Xen Orchestra · 12 Posts · 3 Posters · 627 Views
    • j3458298

      Hello,

      I'm running into an issue and would like to understand the proper procedure to mitigate it going forward.

      As the title suggests, increasing a VM disk size breaks a Continuous Replication backup. My backup routine consists of a rolling snapshot, which is then copied to two remote SRs using the continuous replication option.

      Before I modified the disk size the backups ran perfectly; afterwards they began failing with the errors:

      VDI_IO_ERROR(Device I/O errors)

      and

      all targets have failed, step: writer.transfer()

      I know this has to do with either VDI chain protection or a missing SR coalesce. I did a "Rescan All Disks" and it did not alleviate the issue; was I just not being patient enough?

      It has been working since then, but my main question is: am I supposed to rescan all disks after making that kind of VM disk modification so that future backup runs don't fail?

      Thank you!
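      For context, a rough sketch of why an un-coalesced VDI chain matters: each snapshot adds a link to the chain, and the SR's garbage collector has to merge children back into their parents before the chain shrinks again. The `Vhd` class and helper names below are invented for illustration; they are not XCP-ng code.

      ```python
      # Conceptual illustration of VDI chain coalescing. The Vhd class and
      # helpers are invented for this sketch; on a real host the SR's
      # garbage collector does the merging (progress shows up in SMlog).

      class Vhd:
          def __init__(self, name, parent=None):
              self.name = name
              self.parent = parent      # the base copy has parent=None
              self.blocks = {}          # block index -> data

      def chain_depth(leaf):
          """Number of VHDs from the active leaf down to the base copy."""
          depth, node = 0, leaf
          while node is not None:
              depth += 1
              node = node.parent
          return depth

      def coalesce(child):
          """Merge a child's blocks into its parent and drop the child."""
          parent = child.parent
          parent.blocks.update(child.blocks)   # child data wins
          return parent

      # Each backup snapshot adds one link to the chain:
      base = Vhd("base")
      snap = Vhd("snapshot", parent=base)
      active = Vhd("active", parent=snap)
      assert chain_depth(active) == 3

      # After the GC coalesces the snapshot into the base, the chain shrinks:
      active.parent = coalesce(snap)
      assert chain_depth(active) == 2
      ```

      On a real host the equivalent of "Rescan All Disks" is `xe sr-scan uuid=<sr-uuid>`, and coalesce activity can be followed in `/var/log/SMlog`; the coalesce itself can take a long time on large disks, so a scan alone returning quickly does not mean the chain has merged yet.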

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

        Hello,

        Yeah, I think we fixed that upstream in XenServer, but for some reason the issue is back 😕

        • Andrew (Top contributor)

          I have the same problem... I just deleted the snapshots and the replicated VM and let it start over. Not the best option, but it works. It would be nice if backups kept working without deleting the old stuff.

          • j3458298

            Sorry, I should have specified my version:

            xo-server 5.84.2

            xo-web 5.90.0

            • olivierlambert (Vates 🪐 Co-Founder & CEO)

              It's not an XO problem but one in the XCP-ng storage stack. In Continuous Replication, XO doesn't deal with the VHDs at all.

              • j3458298 @olivierlambert

                @olivierlambert Ah duh.

                With that being said, the XCP-ng version is 8.2.0.

                • j3458298

                  So should I simply be patient and wait for an update?

                  Or does this warrant a bug report on GitHub?

                  Thanks

                  • olivierlambert (Vates 🪐 Co-Founder & CEO)

                    Good question. I'll try to find my original bug report to Citrix first.

                    • olivierlambert (Vates 🪐 Co-Founder & CEO)

                      I only vaguely remember that we had a similar issue with Xen Orchestra coalescing VHDs (on our side) and suggested a fix to Citrix, but I can't find it anymore 🤔

                      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                        @julien-f does that ring a bell? (at least in Xen Orchestra) We had to adapt our VHD code to handle growing VHDs being merged.
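                        A rough sketch of the failure mode being described: merging a child VHD that has been grown beyond its parent's virtual size. If the merge assumes both disks are the same size, writes past the parent's end fail (much like the VDI_IO_ERROR above) unless the parent is grown first. Everything here is invented for illustration; it is not Xen Orchestra's actual VHD code.

                        ```python
                        # Invented sketch of merging a grown child VHD into a smaller parent.
                        # Real VHDs track a virtual size plus a block allocation table; a dict
                        # of block index -> bytes stands in for both here.

                        BLOCK = 2 * 1024 * 1024   # VHD data blocks are 2 MiB

                        class Vhd:
                            def __init__(self, virtual_size):
                                self.virtual_size = virtual_size
                                self.blocks = {}

                        def merge(child, parent):
                            """Fold a child's blocks into its parent, growing the parent
                            first if the child was resized - the step a naive merge skips."""
                            if child.virtual_size > parent.virtual_size:
                                parent.virtual_size = child.virtual_size  # grow before writing
                            for i, data in child.blocks.items():
                                if i * BLOCK >= parent.virtual_size:
                                    raise IOError("write past end of parent")  # ~ VDI_IO_ERROR
                                parent.blocks[i] = data

                        parent = Vhd(virtual_size=10 * BLOCK)
                        child = Vhd(virtual_size=20 * BLOCK)   # the VM disk was grown
                        child.blocks[15] = b"new data"          # a block beyond the old size

                        merge(child, parent)                    # succeeds because parent grew
                        assert parent.virtual_size == 20 * BLOCK
                        assert parent.blocks[15] == b"new data"
                        ```

                        Without the resize step, block 15 would land past the parent's 10-block end and the merge would raise instead of completing.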

                        • j3458298 @olivierlambert

                          @olivierlambert

                          Hello, sorry to bug you, but I was curious about the status of this. Were any bug fixes made to alleviate this issue?

                          Thanks

                          • olivierlambert (Vates 🪐 Co-Founder & CEO)

                            Sadly, I don't think we've had time to investigate it.
