XCP-ng

    Potential bug with Windows VM backup: "Body Timeout Error"

    Backup
    20 Posts 7 Posters 1.6k Views

    • olivierlambert (Vates 🪐 Co-Founder CEO)

      That's a very interesting result 🙂 It means the problem is either an interaction between XO and XAPI, or on XO's side, but not simply an XCP-ng issue as we might have thought initially 🤔

      Can you check whether the XVA file works when you import it (in case xe fails silently)? Use xe vm-import.
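
      For reference, a minimal form of that import looks like this (the path and SR UUID are placeholders; sr-uuid can be omitted to import to the pool's default SR):

          xe vm-import filename=/path/to/export.xva sr-uuid=<target-SR-uuid>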

    • Hex @olivierlambert

        @olivierlambert
        Tested one of the previous exports, and the import with xe vm-import was successful. The Windows OS in the VM starts normally.

        • olivierlambert (Vates 🪐 Co-Founder CEO)

          So maybe the transfer takes longer than one of XO's timeouts. Adding @florent and/or @julien-f to the loop.

          • Greg_E @olivierlambert

            @olivierlambert

            Boosting this because it looks like I have a Windows Server 2022 VM that is going to keep failing. It also has more than 150 GB of free space, and I was thinking of shrinking it down (if only I could pull a good backup first, in case the shrink breaks something). I no longer need that much space.

            That said, a Linux VM with more free space went zooming right along, way faster than the Server 2022 I was backing up at the same time. That other Server 2022 succeeded, but I'll want to try a second run of all my Windows backups to make sure they work before putting them on a schedule.

            I saw a delta-style backup mentioned above; mine fails with a delta too. The snapshot is created, then the compression and file copy start, and that is where things fail.

            I'm writing out to an NFS share, but I might try backing up across my router to my lab, which has an SMB share for backup testing.

            I'm using XCP-ng 8.2.x and XO from sources at commit d7e64.

            I'm migrating that VM from one storage device to another to see if that might be part of the issue; once it's done, I'll give this backup another try.

            • Greg_E @Greg_E

              @Greg_E

              Not sure if this helps: I was able to get this VM to back up using no compression. Now I'm going to make the drive smaller to remove most of the free space and see if compression works.

              This VM had almost 400 GB of free space, and I no longer need that much since Microsoft deprecated a feature I was using after Windows 10; all my clients have been moved to Windows 11.

              I have one more "big" Windows VM that probably has a bunch of space I can reclaim, or I'll just go without compression for that one.

              And this is only a Windows issue: my biggest Linux VM also has a lot of free space (it holds disk images for deployment), and it was FAST compared to a Windows backup.

              [late edit] I forgot that this is a process. The Recovery partition sits at the end of the disk, so shrinking the main partition only leaves unallocated space between the two and doesn't let me shrink the virtual disk at all. What I've done in the past is boot from a Linux disc and use GParted to move the Recovery partition where it needs to be. This machine is one of my domain controllers, so it will need to wait until I have "idle" time on the system to shut it down and do this, maybe tomorrow if I'm lucky. Since I have more than one DC, I generally shouldn't need to worry, but I still try to work around other users.

              [screenshot: partition.png]

              • olivierlambert (Vates 🪐 Co-Founder CEO)

                My previous ping didn't work, so I'll try my luck with @lsouai-vates 😛

                • lsouai-vates (Vates 🪐 Product team, XO Team) @olivierlambert

                  @olivierlambert transferred 😉

                  • Greg_E @lsouai-vates

                    @lsouai-vates

                    I backed up another Windows Server 2022 that had a lot of free space; setting no compression is the workaround right now. I'll have to get both of these VMs shrunk down to a reasonable size and see if compression starts working. That's an after-lunch task for the second "big" VM. I'll report back after performing the shrink steps on the one I can reboot today.

                    I agree with the working theory way up at the top... The process is still going, counting each empty "block" and "compressing" it, but with no data moving for over 5 minutes, it errors out. And 120-150 GB worth of empty space in a Windows VM is enough to hit that timer.

                    Why don't the Linux machines do this? It might be because all of mine finish in less than 10 minutes total, which doesn't leave much time for that timer to run. Three of my Linux VMs with "large" disks went just fine; a couple took only 3 minutes to compress and copy to the remote share.

                    [edit] After shrinking and moving the partitions, I'm finding that XO is not allowed to decrease the size of a "disk", so I might just be stuck with no compression on these two VMs.

                    • lsouai-vates (Vates 🪐 Product team, XO Team) @Greg_E

                      @florent can you help him?

                      • MajorP93 @lsouai-vates

                        Hey,

                        I am experiencing the same issue using XO from sources (commit 4d77b79ce920925691d84b55169ea3b70f7a52f6), Node version 22, Debian 13.

                        I have multiple backup jobs, and only one, a full backup job, is giving me issues.

                        Most VMs can be backed up by this full backup job just fine, but some error out with "body timeout error", e.g.:

                                    {
                                      "id": "1762017810483",
                                      "message": "transfer",
                                      "start": 1762017810483,
                                      "status": "failure",
                                      "end": 1762018134258,
                                      "result": {
                                        "name": "BodyTimeoutError",
                                        "code": "UND_ERR_BODY_TIMEOUT",
                                        "message": "Body Timeout Error",
                                        "stack": "BodyTimeoutError: Body Timeout Error\n    at FastTimer.onParserTimeout [as _onTimeout] (/etc/xen-orchestra/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n    at Timeout.onTick [as _onTimeout] (/etc/xen-orchestra/node_modules/undici/lib/util/timers.js:162:13)\n    at listOnTimeout (node:internal/timers:588:17)\n    at process.processTimers (node:internal/timers:523:7)"
                                      }
                                    }
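
                        For context, that stack trace points at undici, the Node.js HTTP client XO uses, which raises UND_ERR_BODY_TIMEOUT when no response-body data arrives within its bodyTimeout window. undici's default bodyTimeout is 300 000 ms, i.e. exactly the ~5 minutes of stalled transfer Greg_E described above. A minimal sketch of the knob involved (assumes the undici npm package; the URL is a placeholder, not XO's actual code):

                            import { Agent, request } from 'undici'

                            // bodyTimeout measures the gap between received body chunks (ms).
                            // undici's default is 300_000 ms (5 minutes); 0 disables the timer.
                            const dispatcher = new Agent({ bodyTimeout: 0, headersTimeout: 0 })

                            // Stream a (hypothetical) export; with bodyTimeout: 0 a long stall
                            // between chunks no longer aborts the download.
                            const response = await request('https://xcp-host.example/export', { dispatcher })
                            let bytes = 0
                            for await (const chunk of response.body) {
                              bytes += chunk.length
                            }
                            console.log(`received ${bytes} bytes`)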
                        

                        The XO (from sources) VM has 8 vCPUs and 8 GB RAM.
                        Link speed of the XCP-ng hosts is 50 Gbit/s.
                        The XO VM can reach 20 Gbit/s to the NAS in iperf.

                        Zstd compression is enabled for this backup job.
                        It appears that only big VMs (as in disk size) have this issue.
                        The VMs that fail in the full backup job can be backed up just fine via a delta backup job.

                        I read in another thread that this issue can be caused by dom0 hardware constraints, but dom0 has 16 vCPUs and sits at ~40% CPU usage while backups are running.
                        RAM usage sits at 2 GB out of 8 GB.

                        I changed my full backup job to GZIP compression and will see if this helps.
                        Will report back.
                        I really need compression due to the large virtual disks of some VMs...

                        Best regards
                        MajorP

                        • nikade (Top contributor) @MajorP93

                          @MajorP93 I'm seeing this as well; I think the issue is related to communication between XO and XCP-ng.
                          I noticed that it doesn't seem to depend on the VDI size in our case, but rather on the latency between XO and XCP-ng, which are on different sites connected via an IPsec VPN.
