XCP-ng

    MajorP93

    @MajorP93

    7 Reputation · 4 Profile views · 37 Posts · 0 Followers · 0 Following


    Best posts made by MajorP93

    • RE: [VDDK V2V] Migration of VM that had more than 1 snapshot creates multiple VHDs

      @florent said in [VDDK V2V] Migration of VM that had more than 1 snapshot creates multiple VHDs:

      @MajorP93 the sizes are different between the disks, did you modify them since the snapshots?

      Would it be possible to take one new snapshot with the same disk structure?

      Sorry, this was indeed my mistake.
      On the VMware side there are 2 VMs with almost exactly the same name.
      When I checked the disk layout to verify this, I looked at the wrong VM. 🤦

      I checked again and can confirm that the VM in question has 1x 60GiB and 1x 25GiB VMDK.

      So this is not an issue. It is working as intended.

      Thread can be closed / deleted.
      Sorry again and thanks for the replies.

      Best regards
      MajorP

      posted in Xen Orchestra
    • RE: Long backup times via NFS to Data Domain from Xen Orchestra

      Hey,
      small update:
      adding the backup section with the "diskPerVmConcurrency" option to "/etc/xo-server/config.diskConcurrency.toml" or "~/.config/xo-server/config.diskConcurrency.toml" had no effect for me, but I was able to get it working by adding it at the end of my main XO config file, "/etc/xo-server/config.toml".
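      For reference, this is the kind of snippet I am talking about (the value is just an example):

      # Configuration for backups
      [backups]
      diskPerVmConcurrency = 2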

      Best regards

      posted in Backup
    • RE: Potential bug with Windows VM backup: "Body Timeout Error"

      I worked around this issue by changing my full backup job to "delta backup" and enabling "force full backup" in the schedule options.

      Delta backup seems more reliable as of now.

      Looking forward to a fix as Zstd compression is an appealing feature of the full backup method.

      posted in Backup
    • RE: Potential bug with Windows VM backup: "Body Timeout Error"

      I can imagine that a fix could be to send "keepalive" packets alongside the XCP-ng VM export data stream so that the timeout on the XO side does not occur 🤔

      posted in Backup
    • RE: Potential bug with Windows VM backup: "Body Timeout Error"

      @andriy.sultanov said in Potential bug with Windows VM backup: "Body Timeout Error":

      xe-toolstack-restart

      Okay, I was able to replicate the issue.
      This is the setup I used, which resulted in the "body timeout error" previously discussed in this thread:

      OS: Windows Server 2019 Datacenter
      1.png
      2.png

      The package versions used to reproduce the issue (XCP-ng 8.3, fully updated):

      [11:58 dat-xcpng-test01 ~]# rpm -q xapi-core
      xapi-core-25.27.0-2.2.xcpng8.3.x86_64
      [11:59 dat-xcpng-test01 ~]# rpm -q qcow-stream-tool
      qcow-stream-tool-25.27.0-2.2.xcpng8.3.x86_64
      [11:59 dat-xcpng-test01 ~]# rpm -q vhd-tool
      vhd-tool-25.27.0-2.2.xcpng8.3.x86_64
      

      Result:
      3.png
      Backup log:

      {
        "data": {
          "mode": "full",
          "reportWhen": "failure"
        },
        "id": "1764585634255",
        "jobId": "b19ed05e-a34f-4fab-b267-1723a7195f4e",
        "jobName": "Full-Backup-Test",
        "message": "backup",
        "scheduleId": "579d937a-cf57-47b2-8cde-4e8325422b15",
        "start": 1764585634255,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "36c492a8-e321-ef2b-94dc-a14e5757d711"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "36c492a8-e321-ef2b-94dc-a14e5757d711",
              "name_label": "Win2019_EN_DC_TEST"
            },
            "id": "1764585635692",
            "message": "backup VM",
            "start": 1764585635692,
            "status": "failure",
            "tasks": [
              {
                "id": "1764585635919",
                "message": "snapshot",
                "start": 1764585635919,
                "status": "success",
                "end": 1764585644161,
                "result": "0f548c1f-ce5c-56e3-0259-9c59b7851a17"
              },
              {
                "data": {
                  "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550",
                  "type": "remote",
                  "isFull": true
                },
                "id": "1764585644192",
                "message": "export",
                "start": 1764585644192,
                "status": "failure",
                "tasks": [
                  {
                    "id": "1764585644201",
                    "message": "transfer",
                    "start": 1764585644201,
                    "status": "failure",
                    "end": 1764586308921,
                    "result": {
                      "name": "BodyTimeoutError",
                      "code": "UND_ERR_BODY_TIMEOUT",
                      "message": "Body Timeout Error",
                      "stack": "BodyTimeoutError: Body Timeout Error\n    at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n    at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/util/timers.js:162:13)\n    at listOnTimeout (node:internal/timers:588:17)\n    at process.processTimers (node:internal/timers:523:7)"
                    }
                  }
                ],
                "end": 1764586308922,
                "result": {
                  "name": "BodyTimeoutError",
                  "code": "UND_ERR_BODY_TIMEOUT",
                  "message": "Body Timeout Error",
                  "stack": "BodyTimeoutError: Body Timeout Error\n    at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n    at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/util/timers.js:162:13)\n    at listOnTimeout (node:internal/timers:588:17)\n    at process.processTimers (node:internal/timers:523:7)"
                }
              },
              {
                "id": "1764586443440",
                "message": "clean-vm",
                "start": 1764586443440,
                "status": "success",
                "end": 1764586443459,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1764586443624",
                "message": "snapshot",
                "start": 1764586443624,
                "status": "success",
                "end": 1764586451966,
                "result": "c3e9736e-d6eb-3669-c7b8-f603333a83bf"
              },
              {
                "data": {
                  "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550",
                  "type": "remote",
                  "isFull": true
                },
                "id": "1764586452003",
                "message": "export",
                "start": 1764586452003,
                "status": "success",
                "tasks": [
                  {
                    "id": "1764586452008",
                    "message": "transfer",
                    "start": 1764586452008,
                    "status": "success",
                    "end": 1764586686887,
                    "result": {
                      "size": 10464489322
                    }
                  }
                ],
                "end": 1764586686900
              },
              {
                "id": "1764586690122",
                "message": "clean-vm",
                "start": 1764586690122,
                "status": "success",
                "end": 1764586690140,
                "result": {
                  "merge": false
                }
              }
            ],
            "warnings": [
              {
                "data": {
                  "attempt": 1,
                  "error": "Body Timeout Error"
                },
                "message": "Retry the VM backup due to an error"
              }
            ],
            "end": 1764586690142
          }
        ],
        "end": 1764586690143
      }
      

      I then enabled your test repository and installed the packages that you mentioned:

      [12:01 dat-xcpng-test01 ~]# rpm -q xapi-core
      xapi-core-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64
      [12:08 dat-xcpng-test01 ~]# rpm -q vhd-tool
      vhd-tool-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64
      [12:08 dat-xcpng-test01 ~]# rpm -q qcow-stream-tool
      qcow-stream-tool-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64
      

      I restarted the toolstack and re-ran the backup job.
      Unfortunately it did not solve the issue and made the backup behave very strangely:
      9c9e9fdc-8385-4df2-9d23-7b0e4ecee0cd-grafik.png
      The backup job ran for only a few seconds and reported "success", but only 10.83 KiB were transferred. The VM has about 18 GB of used space, so the data was unfortunately not actually transferred by the backup job.

      25deccb4-295e-4ce1-a015-159780536122-grafik.png

      Here is the backup log:

      {
        "data": {
          "mode": "full",
          "reportWhen": "failure"
        },
        "id": "1764586964999",
        "jobId": "b19ed05e-a34f-4fab-b267-1723a7195f4e",
        "jobName": "Full-Backup-Test",
        "message": "backup",
        "scheduleId": "579d937a-cf57-47b2-8cde-4e8325422b15",
        "start": 1764586964999,
        "status": "success",
        "infos": [
          {
            "data": {
              "vms": [
                "36c492a8-e321-ef2b-94dc-a14e5757d711"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "36c492a8-e321-ef2b-94dc-a14e5757d711",
              "name_label": "Win2019_EN_DC_TEST"
            },
            "id": "1764586966983",
            "message": "backup VM",
            "start": 1764586966983,
            "status": "success",
            "tasks": [
              {
                "id": "1764586967194",
                "message": "snapshot",
                "start": 1764586967194,
                "status": "success",
                "end": 1764586975429,
                "result": "ebe5c4e2-5746-9cb3-7df6-701774a679b5"
              },
              {
                "data": {
                  "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550",
                  "type": "remote",
                  "isFull": true
                },
                "id": "1764586975453",
                "message": "export",
                "start": 1764586975453,
                "status": "success",
                "tasks": [
                  {
                    "id": "1764586975473",
                    "message": "transfer",
                    "start": 1764586975473,
                    "status": "success",
                    "end": 1764586981992,
                    "result": {
                      "size": 11093
                    }
                  }
                ],
                "end": 1764586982054
              },
              {
                "id": "1764586985271",
                "message": "clean-vm",
                "start": 1764586985271,
                "status": "success",
                "end": 1764586985290,
                "result": {
                  "merge": false
                }
              }
            ],
            "end": 1764586985291
          }
        ],
        "end": 1764586985292
      }
      

      If you need me to test anything else, or if I should provide log files from the XCP-ng system, please let me know.

      Best regards

      posted in Backup
    • RE: Potential bug with Windows VM backup: "Body Timeout Error"

      @andriy.sultanov I created a small test setup in our lab: a Windows VM with a lot of free disk space (2 virtual disks, 2.5 TB free space in total). Hopefully that way I will be able to replicate the full-backup timeout for VMs with a lot of free space that occurred in our production environment.
      The backup job is currently running. I will report back once it fails and I have had a chance to test whether your fix solves the issue.

      posted in Backup
    • RE: Async.VM.pool_migrate stuck at 57%

      @wmazren I had a similar issue which cost me many hours to troubleshoot.

      I'd advise you to check the "dmesg" output within the VM that cannot be live-migrated.

      XCP-ng / Xen behaves differently from VMware regarding live migration.

      XCP-ng interacts with the Linux kernel upon live migration, and the kernel will try to freeze all processes before performing the migration.

      In my case a "fuse" process blocked the graceful freezing of all processes, and my live migration task was also stuck in the task view, similar to your case.

      After resolving the fuse process issue, the system was able to live-migrate and the problem was gone.

      All of this can be seen in dmesg, as the kernel will tell you what is being done during a live migration via XCP-ng.
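      For example, something along these lines inside the guest shows whether the freeze worked (the grep pattern is only an illustration):

      # run inside the VM that fails to live-migrate
      dmesg -T | grep -iE 'freez|suspend'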

      //EDIT: another thing you might want to try is toggling "migration compression" in the pool settings, as well as making sure you have a dedicated connection / VLAN configured for live migration. Those 2 things also made my live migrations faster and more robust.

      posted in Management
    • RE: Long backup times via NFS to Data Domain from Xen Orchestra

      @florent Unfortunately it is not working. Yesterday when I checked, it was actually backing up a VM that had only 2 disks, and I mistakenly thought it was one of the VMs with a lot of disks attached. Sorry for the confusion.

      It still looks like this:
      1a1b9b32-52bb-44e2-9c1f-9427e3925e14-grafik.png

      Even though I added the following to my config file at /etc/xo-server/config.toml:

      #=====================================================================
      
      # Configuration for backups
      [backups]
      diskPerVmConcurrency = 2
      

      Other settings I defined in /etc/xo-server/config.toml work just fine, e.g. HTTP to HTTPS redirection, the SSL certificate, and similar.
      So I think Xen Orchestra (XO from sources) does read my config file. It simply appears that the backup option "diskPerVmConcurrency" has no effect at all, no matter which file I set it in.
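      (For context, the entries that do take effect for me look roughly like the ones from the sample config below; ports and paths are placeholders:)

      [http]
      redirectToHttps = true

      [[http.listen]]
      port = 80

      [[http.listen]]
      port = 443
      cert = '/path/to/certificate.pem'
      key = '/path/to/key.pem'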

      Is this setting working for anyone else?

      posted in Backup

    Latest posts made by MajorP93

    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      Well, I am not entirely sure, but in case the effect of SR.scan on logging is amplified by the size of the virtual disks as well (in addition to the number of virtual disks), that might be the cause. I have a few virtual machines that have a) many disks (up to 9) and b) large disks.
      I know it is rather bad design to run VMs this way (in my case these are file servers); I understand that using a NAS and mounting a share would be better here, but I had to migrate these VMs from the old environment and keep them running the way they are.
      That is the only thing I can think of that could result in SR.scan having this big an impact in my pool.

      posted in XCP-ng
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @Pilow correct me if I'm wrong, but I think day-to-day operations like VM start/stop, SR attach, VDI create, etc. perform explicit storage calls anyway, so they should not depend strongly on this periodic SR.scan, which is why I considered applying this safe.

      posted in XCP-ng
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      I applied

      xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID>
      

      on my pool master as suggested by @flakpyro, and it had a direct impact on how often SR.scan tasks pop up and on the amount of log output!
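      To double-check that the value is actually set, something like this should work (standard xe param syntax, UUID is a placeholder):

      xe host-param-get uuid=<Host UUID> param-name=other-config param-key=auto-scan-interval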

      I implemented Graylog and remote syslog on my XCP-ng pool after posting the first message of this thread, and in the image below you can clearly see the effect of "auto-scan-interval" on the logging output.

      9814ad82-f2cd-4a66-a583-0e91fae9c01e-grafik.png

      I will keep monitoring this, but it seems to improve things quite substantially!

      Since multiple users appear to be affected by this, it may be a good idea to change the default value within XCP-ng and/or add this to the official documentation.

      posted in XCP-ng
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @flakpyro said in log_fs_usage / /var/log directory on pool master filling up constantly:

      One of our pools. (5 hosts, 6 NFS SRs) had this issue when we first deployed it. I engaged with support from Vates and they changed a setting that reduced the frequency of the SR.scan job from 30 seconds to every 2 mins instead. This totally fixed the issue for us going on a year and a half later.

      I dug back in our documentation and found the command they gave us

          xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID> 
      

      Where the host UUID is that of your pool master.

      Thank you very much for checking your documentation and sharing your fix!
      I will try your approach on my pool master.

      Best regards

      posted in XCP-ng
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      I did some research and found 2 (old) forum threads where other people encountered the issue I am currently facing (/var/log full after some time). One thread was in this forum, the other in the Citrix XenServer forum.
      In both cases it was recommended to check for old / incompatible XenTools, since apparently they can cause this exact behaviour of filling up /var/log on the pool master.
      Apparently there even was a confirmed bug in one version of the Citrix XenTools for this issue.

      My 105 virtual machines are all either Windows Server or Debian (mixed Debian 11, 12 and 13).
      I am using these XenTools on most of my Debian systems (latest version): https://github.com/xenserver/xe-guest-utilities
      I am using these XenTools on all of my Windows systems: "XenServer VM Tools for Windows 9.4.2", https://www.xenserver.com/downloads

      Are those XenTools expected to cause issues? Which XenTools are expected to work best with a fully updated XCP-ng 8.3 as of now?

      Best regards

      posted in XCP-ng
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      @bvitnik Thanks for your reply!

      Yes, all of the XCP-ng hosts have been restarted (due to package upgrades) since I started monitoring the /var/log directory.
      I also restarted the toolstack 2 or 3 times in that time frame, so I think the issue was not caused by some sort of stuck process or similar.

      I did some research in this regard and also noticed that most people with an environment of my scale do not encounter this issue (I currently have 105 VMs running).
      So I also suspect that there is something unusual happening in my pool.

      I thought about circumventing this issue by implementing a remote syslog server (like graylog) that has enough storage and letting all my XCP-ng hosts write to it.
      I would really prefer to fix the underlying issue though.
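      (In case it helps others: if I am not mistaken, pointing a host at a remote syslog server works roughly like this, with the server name and UUID as placeholders.)

      xe host-param-set uuid=<Host UUID> logging:syslog_destination=<syslog server>
      xe host-syslog-reconfigure host-uuid=<Host UUID>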

      Does anybody know of common causes for this that I could check?
      That would be really awesome.

      Thanks and best regards

      posted in XCP-ng
    • log_fs_usage / /var/log directory on pool master filling up constantly

      Dear XCP-ng community,
      I have an issue with my XCP-ng pool master.
      My /var/log directory is filling up every 2-3 weeks.
      I have already cleared the content of the directory twice, but I would prefer to fix the underlying issue.
      Due to the directory filling up I get a "log_fs_usage" alarm in Xen Orchestra.
      It is a pool consisting of 5 XCP-ng hosts and 2 NFS SRs.
      All nodes are fully up to date (regarding package versions) and are running XCP-ng 8.3.
      I thought about increasing the size of the "/dev/md127p5" partition mounted at "/var/log", but I have read that XCP-ng should be considered an "appliance" and fiddling with dom0 should therefore be avoided most of the time.

      I checked some log files and XCP-ng appears to be really chatty; most of the relevant services seem to log at some sort of "debug" level.
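      For anyone wanting to reproduce the check, something like this lists the largest log files (standard tools, nothing XCP-ng-specific):

      du -sh /var/log/* | sort -rh | head -n 20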

      I was also wondering whether this might be related to "SR.scan" tasks. When I remove the default filter in the tasks menu in XO, I can see "SR.scan" being performed / repeated almost all the time.
      "SR.scan" is popping up roughly every 10 seconds.
      Is this expected, normal behaviour?

      71db5767-5299-4acc-a658-c4b50d0775d7-grafik.png

      Below is the current content of my /var/log directory.

      [18:12 dat-xcpng01 log]# ls -alh
      insgesamt 2,7G
      drwxr-xr-x  3 root root   20K  1. Dez 18:10 .
      drwxr-xr-x 20 root root  4,0K 29. Okt 18:47 ..
      -rw-------  1 root root   71M  1. Dez 18:12 audit.log
      -rw-------  1 root root  101M  1. Dez 13:22 audit.log.1
      -rw-------  1 root root  9,2M 29. Nov 07:37 audit.log.10.gz
      -rw-------  1 root root  5,9M 29. Nov 01:02 audit.log.11.gz
      -rw-------  1 root root  9,4M 28. Nov 20:31 audit.log.12.gz
      -rw-------  1 root root  9,4M 28. Nov 12:24 audit.log.13.gz
      -rw-------  1 root root  9,4M 28. Nov 06:34 audit.log.14.gz
      -rw-------  1 root root  7,6M 28. Nov 00:42 audit.log.15.gz
      -rw-------  1 root root  9,6M 27. Nov 19:22 audit.log.16.gz
      -rw-------  1 root root  9,4M 27. Nov 12:19 audit.log.17.gz
      -rw-------  1 root root  9,4M 27. Nov 06:13 audit.log.18.gz
      -rw-------  1 root root  9,4M 27. Nov 00:10 audit.log.19.gz
      -rw-------  1 root root  9,5M 26. Nov 18:02 audit.log.20.gz
      -rw-------  1 root root  9,5M 26. Nov 12:11 audit.log.21.gz
      -rw-------  1 root root  9,4M 26. Nov 06:12 audit.log.22.gz
      -rw-------  1 root root  6,4M 26. Nov 00:15 audit.log.23.gz
      -rw-------  1 root root  9,5M 25. Nov 19:39 audit.log.24.gz
      -rw-------  1 root root  9,5M 25. Nov 12:36 audit.log.25.gz
      -rw-------  1 root root  9,4M 25. Nov 06:48 audit.log.26.gz
      -rw-------  1 root root  4,5M 25. Nov 00:42 audit.log.27.gz
      -rw-------  1 root root  9,4M 24. Nov 21:56 audit.log.28.gz
      -rw-------  1 root root  9,4M 24. Nov 14:02 audit.log.29.gz
      -rw-------  1 root root  9,4M  1. Dez 07:30 audit.log.2.gz
      -rw-------  1 root root  9,1M 24. Nov 07:02 audit.log.30.gz
      -rw-------  1 root root  1,1M 24. Nov 00:27 audit.log.31.gz
      -rw-------  1 root root  5,2M  1. Dez 00:27 audit.log.3.gz
      -rw-------  1 root root  9,5M 30. Nov 19:56 audit.log.4.gz
      -rw-------  1 root root  9,4M 30. Nov 12:45 audit.log.5.gz
      -rw-------  1 root root  9,5M 30. Nov 06:46 audit.log.6.gz
      -rw-------  1 root root  1,6M 30. Nov 00:23 audit.log.7.gz
      -rw-------  1 root root  9,2M 29. Nov 23:31 audit.log.8.gz
      -rw-------  1 root root  9,1M 29. Nov 15:16 audit.log.9.gz
      drwxr-xr-x  2 root root  4,0K  1. Dez 17:55 blktap
      -rw-------  1 root root   51K  1. Dez 18:10 cron
      -rw-------  1 root root   69K  1. Dez 00:20 cron.1
      -rw-------  1 root root  4,4K 22. Nov 00:10 cron.10.gz
      -rw-------  1 root root  4,3K 21. Nov 00:01 cron.11.gz
      -rw-------  1 root root  4,4K 20. Nov 00:30 cron.12.gz
      -rw-------  1 root root  4,3K 19. Nov 00:01 cron.13.gz
      -rw-------  1 root root  4,5K 18. Nov 00:45 cron.14.gz
      -rw-------  1 root root  4,4K 17. Nov 00:01 cron.15.gz
      -rw-------  1 root root  4,3K 16. Nov 00:00 cron.16.gz
      -rw-------  1 root root  4,4K 15. Nov 00:30 cron.17.gz
      -rw-------  1 root root  4,0K 14. Nov 00:20 cron.18.gz
      -rw-------  1 root root  4,7K 13. Nov 02:50 cron.19.gz
      -rw-------  1 root root  4,3K 30. Nov 00:20 cron.2.gz
      -rw-------  1 root root  4,5K 29. Nov 01:01 cron.3.gz
      -rw-------  1 root root  4,4K 28. Nov 00:40 cron.4.gz
      -rw-------  1 root root  4,3K 27. Nov 00:10 cron.5.gz
      -rw-------  1 root root  4,3K 26. Nov 00:15 cron.6.gz
      -rw-------  1 root root  4,4K 25. Nov 00:40 cron.7.gz
      -rw-------  1 root root  4,4K 24. Nov 00:20 cron.8.gz
      -rw-------  1 root root  4,4K 23. Nov 00:20 cron.9.gz
      -rw-------  1 root root  5,1M  1. Dez 18:12 daemon.log
      -rw-------  1 root root  6,0M  1. Dez 00:27 daemon.log.1
      -rw-------  1 root root  506K 22. Nov 00:10 daemon.log.10.gz
      -rw-------  1 root root  440K 21. Nov 00:09 daemon.log.11.gz
      -rw-------  1 root root  482K 20. Nov 00:38 daemon.log.12.gz
      -rw-------  1 root root  456K 19. Nov 00:08 daemon.log.13.gz
      -rw-------  1 root root  1,1M 18. Nov 00:45 daemon.log.14.gz
      -rw-------  1 root root  451K 17. Nov 00:03 daemon.log.15.gz
      -rw-------  1 root root  490K 16. Nov 00:00 daemon.log.16.gz
      -rw-------  1 root root  347K 15. Nov 00:31 daemon.log.17.gz
      -rw-------  1 root root  337K 14. Nov 00:24 daemon.log.18.gz
      -rw-------  1 root root  587K 13. Nov 02:55 daemon.log.19.gz
      -rw-------  1 root root  458K 30. Nov 00:23 daemon.log.2.gz
      -rw-------  1 root root  527K 29. Nov 01:02 daemon.log.3.gz
      -rw-------  1 root root  521K 28. Nov 00:42 daemon.log.4.gz
      -rw-------  1 root root  532K 27. Nov 00:10 daemon.log.5.gz
      -rw-------  1 root root  444K 26. Nov 00:15 daemon.log.6.gz
      -rw-------  1 root root  534K 25. Nov 00:42 daemon.log.7.gz
      -rw-------  1 root root  460K 24. Nov 00:27 daemon.log.8.gz
      -rw-------  1 root root  466K 23. Nov 00:22 daemon.log.9.gz
      -rw-------  1 root root   32K  1. Dez 17:55 kern.log
      -rw-------  1 root root   36K 30. Nov 16:28 kern.log.1
      -rw-------  1 root root  8,0K 22. Nov 00:03 kern.log.10.gz
      -rw-------  1 root root  6,5K 21. Nov 00:09 kern.log.11.gz
      -rw-------  1 root root  7,1K 20. Nov 00:06 kern.log.12.gz
      -rw-------  1 root root  6,9K 19. Nov 00:08 kern.log.13.gz
      -rw-------  1 root root  7,8K 17. Nov 23:49 kern.log.14.gz
      -rw-------  1 root root  6,6K 17. Nov 00:03 kern.log.15.gz
      -rw-------  1 root root  7,3K 15. Nov 23:54 kern.log.16.gz
      -rw-------  1 root root  3,8K 15. Nov 00:24 kern.log.17.gz
      -rw-------  1 root root  4,6K 14. Nov 00:19 kern.log.18.gz
      -rw-------  1 root root  5,4K 13. Nov 02:55 kern.log.19.gz
      -rw-------  1 root root  7,0K 30. Nov 00:11 kern.log.2.gz
      -rw-------  1 root root  8,0K 28. Nov 23:38 kern.log.3.gz
      -rw-------  1 root root  8,1K 28. Nov 00:40 kern.log.4.gz
      -rw-------  1 root root  8,2K 27. Nov 00:10 kern.log.5.gz
      -rw-------  1 root root  6,4K 26. Nov 00:13 kern.log.6.gz
      -rw-------  1 root root  7,9K 25. Nov 00:42 kern.log.7.gz
      -rw-------  1 root root  7,1K 23. Nov 23:49 kern.log.8.gz
      -rw-------  1 root root  7,0K 22. Nov 23:54 kern.log.9.gz
      -rw-r--r--  1 root root     0 30. Okt 19:00 lvm-plugin.log
      -rw-------  1 root root   14K  1. Dez 18:10 maillog
      -rw-------  1 root root   19K  1. Dez 00:20 maillog.1
      -rw-------  1 root root  1,7K 22. Nov 00:10 maillog.10.gz
      -rw-------  1 root root  1,7K 21. Nov 00:00 maillog.11.gz
      -rw-------  1 root root  1,8K 20. Nov 00:30 maillog.12.gz
      -rw-------  1 root root  1,7K 19. Nov 00:00 maillog.13.gz
      -rw-------  1 root root  1,8K 18. Nov 00:40 maillog.14.gz
      -rw-------  1 root root  1,8K 17. Nov 00:00 maillog.15.gz
      -rw-------  1 root root  1,7K 16. Nov 00:00 maillog.16.gz
      -rw-------  1 root root  1,8K 15. Nov 00:30 maillog.17.gz
      -rw-------  1 root root  1,6K 14. Nov 00:20 maillog.18.gz
      -rw-------  1 root root  1,9K 13. Nov 02:50 maillog.19.gz
      -rw-------  1 root root  1,7K 30. Nov 00:20 maillog.2.gz
      -rw-------  1 root root  1,8K 29. Nov 01:00 maillog.3.gz
      -rw-------  1 root root  1,8K 28. Nov 00:40 maillog.4.gz
      -rw-------  1 root root  1,8K 27. Nov 00:10 maillog.5.gz
      -rw-------  1 root root  1,7K 26. Nov 00:10 maillog.6.gz
      -rw-------  1 root root  1,8K 25. Nov 00:40 maillog.7.gz
      -rw-------  1 root root  1,7K 24. Nov 00:20 maillog.8.gz
      -rw-------  1 root root  1,7K 23. Nov 00:20 maillog.9.gz
      -rw-r--r--  1 root root     0 30. Okt 22:25 raid-plugin.log
      -rw-------  1 root root   21M  1. Dez 18:12 secure
      -rw-------  1 root root   33M  1. Dez 00:27 secure.1
      -rw-------  1 root root  1,4M 22. Nov 00:11 secure.10.gz
      -rw-------  1 root root  1,2M 21. Nov 00:09 secure.11.gz
      -rw-------  1 root root  1,2M 20. Nov 00:38 secure.12.gz
      -rw-------  1 root root  1,1M 19. Nov 00:08 secure.13.gz
      -rw-------  1 root root  1,2M 18. Nov 00:45 secure.14.gz
      -rw-------  1 root root  1,1M 17. Nov 00:03 secure.15.gz
      -rw-------  1 root root  2,2M 16. Nov 00:00 secure.16.gz
      -rw-------  1 root root  3,0M 15. Nov 00:32 secure.17.gz
      -rw-------  1 root root  3,1M 14. Nov 00:24 secure.18.gz
      -rw-------  1 root root  3,0M 13. Nov 02:55 secure.19.gz
      -rw-------  1 root root  1,1M 30. Nov 00:23 secure.2.gz
      -rw-------  1 root root  1,8M 29. Nov 01:02 secure.3.gz
      -rw-------  1 root root  2,3M 28. Nov 00:42 secure.4.gz
      -rw-------  1 root root  4,4M 27. Nov 00:10 secure.5.gz
      -rw-------  1 root root  3,3M 26. Nov 00:15 secure.6.gz
      -rw-------  1 root root  1,6M 25. Nov 00:42 secure.7.gz
      -rw-------  1 root root  1,1M 24. Nov 00:27 secure.8.gz
      -rw-------  1 root root  1,1M 23. Nov 00:22 secure.9.gz
      -rw-r--r--  1 root root     0  2. Nov 11:58 smartctl-plugin.log
      -rw-------  1 root root  4,3M  1. Dez 18:12 SMlog
      -rw-------  1 root root  101M  1. Dez 18:09 SMlog.1
      -rw-------  1 root root   44M  1. Dez 04:42 SMlog.10.gz
      -rw-------  1 root root   44M  1. Dez 03:38 SMlog.11.gz
      -rw-------  1 root root   44M  1. Dez 02:35 SMlog.12.gz
      -rw-------  1 root root   44M  1. Dez 01:32 SMlog.13.gz
      -rw-------  1 root root   43M  1. Dez 00:27 SMlog.14.gz
      -rw-------  1 root root   43M 30. Nov 23:15 SMlog.15.gz
      -rw-------  1 root root   43M 30. Nov 21:58 SMlog.16.gz
      -rw-------  1 root root   44M 30. Nov 20:44 SMlog.17.gz
      -rw-------  1 root root   43M 30. Nov 19:41 SMlog.18.gz
      -rw-------  1 root root   43M 30. Nov 18:38 SMlog.19.gz
      -rw-------  1 root root   43M 30. Nov 17:34 SMlog.20.gz
      -rw-------  1 root root   42M 30. Nov 16:31 SMlog.21.gz
      -rw-------  1 root root   43M 30. Nov 15:24 SMlog.22.gz
      -rw-------  1 root root   43M 30. Nov 14:20 SMlog.23.gz
      -rw-------  1 root root   44M 30. Nov 13:15 SMlog.24.gz
      -rw-------  1 root root   44M 30. Nov 12:14 SMlog.25.gz
      -rw-------  1 root root   45M 30. Nov 11:09 SMlog.26.gz
      -rw-------  1 root root   46M 30. Nov 09:34 SMlog.27.gz
      -rw-------  1 root root   46M 30. Nov 08:35 SMlog.28.gz
      -rw-------  1 root root   46M 30. Nov 07:37 SMlog.29.gz
      -rw-------  1 root root   46M  1. Dez 17:07 SMlog.2.gz
      -rw-------  1 root root   45M 30. Nov 06:39 SMlog.30.gz
      -rw-------  1 root root   45M 30. Nov 05:39 SMlog.31.gz
      -rw-------  1 root root   45M  1. Dez 16:07 SMlog.3.gz
      -rw-------  1 root root   45M  1. Dez 15:07 SMlog.4.gz
      -rw-------  1 root root   45M  1. Dez 14:08 SMlog.5.gz
      -rw-------  1 root root   46M  1. Dez 13:04 SMlog.6.gz
      -rw-------  1 root root   43M  1. Dez 12:05 SMlog.7.gz
      -rw-------  1 root root   46M  1. Dez 06:51 SMlog.8.gz
      -rw-------  1 root root   43M  1. Dez 05:52 SMlog.9.gz
      -rw-r--r--  1 root root     0 30. Okt 19:00 updater-plugin.log
      -rw-------  1 root root   23K  1. Dez 18:10 user.log
      -rw-------  1 root root   12K 30. Nov 23:44 user.log.1
      -rw-------  1 root root  3,7K 21. Nov 23:55 user.log.10.gz
      -rw-------  1 root root  1,5K 21. Nov 00:04 user.log.11.gz
      -rw-------  1 root root  1,5K 20. Nov 00:04 user.log.12.gz
      -rw-------  1 root root  1,6K 19. Nov 00:06 user.log.13.gz
      -rw-------  1 root root  3,1K 17. Nov 23:49 user.log.14.gz
      -rw-------  1 root root  1,9K 16. Nov 23:41 user.log.15.gz
      -rw-------  1 root root  1,5K 15. Nov 23:54 user.log.16.gz
      -rw-------  1 root root  1,2K 15. Nov 00:28 user.log.17.gz
      -rw-------  1 root root  9,2K 14. Nov 00:05 user.log.18.gz
      -rw-------  1 root root   17K 13. Nov 02:47 user.log.19.gz
      -rw-------  1 root root  1,4K 30. Nov 00:21 user.log.2.gz
      -rw-------  1 root root  4,8K 28. Nov 23:52 user.log.3.gz
      -rw-------  1 root root  1,5K 28. Nov 00:36 user.log.4.gz
      -rw-------  1 root root  1,6K 27. Nov 00:02 user.log.5.gz
      -rw-------  1 root root  1,4K 26. Nov 00:14 user.log.6.gz
      -rw-------  1 root root  3,2K 25. Nov 00:38 user.log.7.gz
      -rw-------  1 root root  1,7K 23. Nov 23:59 user.log.8.gz
      -rw-------  1 root root  1,3K 22. Nov 23:53 user.log.9.gz
      -rw-------  1 root root   13K  1. Dez 18:00 VMSSlog
      -rw-------  1 root root   17K  1. Dez 00:15 VMSSlog.1
      -rw-------  1 root root  1,3K 22. Nov 00:00 VMSSlog.10.gz
      -rw-------  1 root root  1,3K 21. Nov 00:00 VMSSlog.11.gz
      -rw-------  1 root root  1,3K 20. Nov 00:30 VMSSlog.12.gz
      -rw-------  1 root root  1,3K 19. Nov 00:00 VMSSlog.13.gz
      -rw-------  1 root root  1,4K 18. Nov 00:45 VMSSlog.14.gz
      -rw-------  1 root root  1,3K 17. Nov 00:00 VMSSlog.15.gz
      -rw-------  1 root root  1,3K 16. Nov 00:00 VMSSlog.16.gz
      -rw-------  1 root root  1,3K 15. Nov 00:30 VMSSlog.17.gz
      -rw-------  1 root root  1,2K 14. Nov 00:15 VMSSlog.18.gz
      -rw-------  1 root root  1,4K 13. Nov 02:45 VMSSlog.19.gz
      -rw-------  1 root root  1,3K 30. Nov 00:15 VMSSlog.2.gz
      -rw-------  1 root root  1,3K 29. Nov 01:00 VMSSlog.3.gz
      -rw-------  1 root root  1,3K 28. Nov 00:30 VMSSlog.4.gz
      -rw-------  1 root root  1,3K 27. Nov 00:00 VMSSlog.5.gz
      -rw-------  1 root root  1,3K 26. Nov 00:15 VMSSlog.6.gz
      -rw-------  1 root root  1,3K 25. Nov 00:30 VMSSlog.7.gz
      -rw-------  1 root root  1,3K 24. Nov 00:15 VMSSlog.8.gz
      -rw-------  1 root root  1,3K 23. Nov 00:15 VMSSlog.9.gz
      -rw-------  1 root root  123K  1. Dez 18:11 xcp-rrdd-plugins.log
      -rw-------  1 root root  161K  1. Dez 00:23 xcp-rrdd-plugins.log.1
      -rw-------  1 root root   24K 22. Nov 00:08 xcp-rrdd-plugins.log.10.gz
      -rw-------  1 root root   20K 21. Nov 00:09 xcp-rrdd-plugins.log.11.gz
      -rw-------  1 root root   22K 20. Nov 00:37 xcp-rrdd-plugins.log.12.gz
      -rw-------  1 root root   21K 19. Nov 00:08 xcp-rrdd-plugins.log.13.gz
      -rw-------  1 root root   23K 18. Nov 00:45 xcp-rrdd-plugins.log.14.gz
      -rw-------  1 root root   20K 17. Nov 00:03 xcp-rrdd-plugins.log.15.gz
      -rw-------  1 root root   22K 15. Nov 23:59 xcp-rrdd-plugins.log.16.gz
      -rw-------  1 root root   14K 15. Nov 00:30 xcp-rrdd-plugins.log.17.gz
      -rw-------  1 root root   14K 14. Nov 00:20 xcp-rrdd-plugins.log.18.gz
      -rw-------  1 root root   16K 13. Nov 02:55 xcp-rrdd-plugins.log.19.gz
      -rw-------  1 root root   21K 30. Nov 00:21 xcp-rrdd-plugins.log.2.gz
      -rw-------  1 root root   23K 29. Nov 00:59 xcp-rrdd-plugins.log.3.gz
      -rw-------  1 root root   24K 28. Nov 00:41 xcp-rrdd-plugins.log.4.gz
      -rw-------  1 root root   26K 27. Nov 00:10 xcp-rrdd-plugins.log.5.gz
      -rw-------  1 root root   19K 26. Nov 00:14 xcp-rrdd-plugins.log.6.gz
      -rw-------  1 root root   25K 25. Nov 00:42 xcp-rrdd-plugins.log.7.gz
      -rw-------  1 root root   21K 24. Nov 00:24 xcp-rrdd-plugins.log.8.gz
      -rw-------  1 root root   21K 23. Nov 00:19 xcp-rrdd-plugins.log.9.gz
      -rw-------  1 root root  508K  1. Dez 18:12 xensource.log
      -rw-------  1 root root  101M  1. Dez 18:10 xensource.log.1
      -rw-------  1 root root   11M 29. Nov 12:21 xensource.log.10.gz
      -rw-------  1 root root   11M 29. Nov 06:24 xensource.log.11.gz
      -rw-------  1 root root  3,3M 29. Nov 01:02 xensource.log.12.gz
      -rw-------  1 root root   11M 28. Nov 23:17 xensource.log.13.gz
      -rw-------  1 root root   11M 28. Nov 17:43 xensource.log.14.gz
      -rw-------  1 root root   11M 28. Nov 11:28 xensource.log.15.gz
      -rw-------  1 root root   11M 28. Nov 06:00 xensource.log.16.gz
      -rw-------  1 root root  3,5M 28. Nov 00:42 xensource.log.17.gz
      -rw-------  1 root root   11M 27. Nov 23:20 xensource.log.18.gz
      -rw-------  1 root root   11M 27. Nov 17:16 xensource.log.19.gz
      -rw-------  1 root root   11M 27. Nov 10:44 xensource.log.20.gz
      -rw-------  1 root root   11M 27. Nov 05:18 xensource.log.21.gz
      -rw-------  1 root root  2,5M 27. Nov 00:10 xensource.log.22.gz
      -rw-------  1 root root   11M 26. Nov 23:17 xensource.log.23.gz
      -rw-------  1 root root   11M 26. Nov 17:17 xensource.log.24.gz
      -rw-------  1 root root   11M 26. Nov 10:54 xensource.log.25.gz
      -rw-------  1 root root   11M 26. Nov 05:34 xensource.log.26.gz
      -rw-------  1 root root   11M 26. Nov 00:15 xensource.log.27.gz
      -rw-------  1 root root   11M 25. Nov 18:21 xensource.log.28.gz
      -rw-------  1 root root   11M 25. Nov 11:42 xensource.log.29.gz
      -rw-------  1 root root   11M  1. Dez 12:05 xensource.log.2.gz
      -rw-------  1 root root   11M 25. Nov 06:08 xensource.log.30.gz
      -rw-------  1 root root  8,8M 25. Nov 00:42 xensource.log.31.gz
      -rw-------  1 root root   11M 24. Nov 20:45 xensource.log.32.gz
      -rw-------  1 root root   11M 24. Nov 14:04 xensource.log.33.gz
      -rw-------  1 root root   11M 24. Nov 07:09 xensource.log.34.gz
      -rw-------  1 root root   11M 24. Nov 00:27 xensource.log.35.gz
      -rw-------  1 root root   11M 23. Nov 16:56 xensource.log.36.gz
      -rw-------  1 root root   11M 23. Nov 07:15 xensource.log.37.gz
      -rw-------  1 root root   11M 23. Nov 00:22 xensource.log.38.gz
      -rw-------  1 root root   11M 22. Nov 16:53 xensource.log.39.gz
      -rw-------  1 root root   11M  1. Dez 06:35 xensource.log.3.gz
      -rw-------  1 root root   11M 22. Nov 07:21 xensource.log.40.gz
      -rw-------  1 root root  1,9M 22. Nov 00:11 xensource.log.41.gz
      -rw-------  1 root root   11M 21. Nov 23:15 xensource.log.42.gz
      -rw-------  1 root root   11M 21. Nov 15:27 xensource.log.43.gz
      -rw-------  1 root root   11M 21. Nov 07:04 xensource.log.44.gz
      -rw-------  1 root root   11M 21. Nov 00:09 xensource.log.45.gz
      -rw-------  1 root root   11M 20. Nov 16:04 xensource.log.46.gz
      -rw-------  1 root root   11M 20. Nov 07:27 xensource.log.47.gz
      -rw-------  1 root root  1,7M 20. Nov 00:38 xensource.log.48.gz
      -rw-------  1 root root   11M 19. Nov 23:43 xensource.log.49.gz
      -rw-------  1 root root  9,8M  1. Dez 00:27 xensource.log.4.gz
      -rw-------  1 root root   11M 19. Nov 15:38 xensource.log.50.gz
      -rw-------  1 root root   11M 19. Nov 07:02 xensource.log.51.gz
      -rw-------  1 root root   11M 19. Nov 00:08 xensource.log.52.gz
      -rw-------  1 root root   11M 18. Nov 16:26 xensource.log.53.gz
      -rw-------  1 root root   11M 18. Nov 07:35 xensource.log.54.gz
      -rw-------  1 root root  2,6M 18. Nov 00:45 xensource.log.55.gz
      -rw-------  1 root root   11M 17. Nov 23:17 xensource.log.56.gz
      -rw-------  1 root root   11M 17. Nov 15:08 xensource.log.57.gz
      -rw-------  1 root root   11M 17. Nov 06:29 xensource.log.58.gz
      -rw-------  1 root root   11M 17. Nov 00:03 xensource.log.59.gz
      -rw-------  1 root root   11M 30. Nov 17:28 xensource.log.5.gz
      -rw-------  1 root root   11M 16. Nov 16:12 xensource.log.60.gz
      -rw-------  1 root root   11M 16. Nov 08:25 xensource.log.61.gz
      -rw-------  1 root root   11M 16. Nov 00:00 xensource.log.62.gz
      -rw-------  1 root root   11M 15. Nov 17:05 xensource.log.63.gz
      -rw-------  1 root root   11M 15. Nov 08:41 xensource.log.64.gz
      -rw-------  1 root root  8,6M 15. Nov 00:32 xensource.log.65.gz
      -rw-------  1 root root   12M 14. Nov 17:52 xensource.log.66.gz
      -rw-------  1 root root   11M 14. Nov 09:12 xensource.log.67.gz
      -rw-------  1 root root  4,7M 14. Nov 00:24 xensource.log.68.gz
      -rw-------  1 root root   12M 13. Nov 21:57 xensource.log.69.gz
      -rw-------  1 root root   11M 30. Nov 11:09 xensource.log.6.gz
      -rw-------  1 root root   11M 13. Nov 12:12 xensource.log.70.gz
      -rw-------  1 root root   11M 30. Nov 05:52 xensource.log.7.gz
      -rw-------  1 root root   11M 30. Nov 00:23 xensource.log.8.gz
      -rw-------  1 root root   11M 29. Nov 19:20 xensource.log.9.gz
      -rw-------  1 root root   13M  1. Dez 18:12 xenstored-access.log
      -rw-------  1 root root   17M  1. Dez 00:27 xenstored-access.log.1
      -rw-------  1 root root  698K 22. Nov 00:11 xenstored-access.log.10.gz
      -rw-------  1 root root  673K 21. Nov 00:09 xenstored-access.log.11.gz
      -rw-------  1 root root  712K 20. Nov 00:37 xenstored-access.log.12.gz
      -rw-------  1 root root  671K 19. Nov 00:08 xenstored-access.log.13.gz
      -rw-------  1 root root  730K 18. Nov 00:45 xenstored-access.log.14.gz
      -rw-------  1 root root  671K 17. Nov 00:03 xenstored-access.log.15.gz
      -rw-------  1 root root  662K 16. Nov 00:00 xenstored-access.log.16.gz
      -rw-------  1 root root  690K 15. Nov 00:32 xenstored-access.log.17.gz
      -rw-------  1 root root  688K 14. Nov 00:24 xenstored-access.log.18.gz
      -rw-------  1 root root  998K 13. Nov 02:55 xenstored-access.log.19.gz
      -rw-------  1 root root  956K 30. Nov 00:23 xenstored-access.log.2.gz
      -rw-------  1 root root  1,1M 29. Nov 01:02 xenstored-access.log.3.gz
      -rw-------  1 root root  1,1M 28. Nov 00:42 xenstored-access.log.4.gz
      -rw-------  1 root root 1002K 27. Nov 00:10 xenstored-access.log.5.gz
      -rw-------  1 root root  976K 26. Nov 00:15 xenstored-access.log.6.gz
      -rw-------  1 root root  902K 25. Nov 00:42 xenstored-access.log.7.gz
      -rw-------  1 root root  672K 24. Nov 00:27 xenstored-access.log.8.gz
      -rw-------  1 root root  675K 23. Nov 00:22 xenstored-access.log.9.gz
      -rw-r--r--  1 root root  6,0K 24. Nov 10:22 xha.log
      -rw-------  1 root root   228 31. Okt 23:44 yum.log
      

      If anybody could give me some pointers on how to troubleshoot this further, that would be greatly appreciated.

      Thanks a lot in advance.

      Best regards

      posted in XCP-ng
    • RE: 🛰️ XO 6: dedicated thread for all your feedback!

      First of all, thanks for all the hard work that has gone into reaching the XO 6 milestone!
      I think you guys nailed it visually.
      The difference between XO 5 and XO 6 is night and day!

      I am not sure if you have already noticed, but in XO 6, when you click on pool --> storage and then click on one of your SRs, it opens XO 5 in a new tab and shows the SR there.
      That is something I noticed today while trying to perform a task in XO 6.

      b6477391-f55f-4642-abce-7b059ea574ca-grafik.png

      Best regards

      posted in Xen Orchestra