XCP-ng
Popular
    • Unable to update XOA (Management)

      0 Votes, 10 Posts, 51 Views

      @fred974 It used to be (and probably still is) that you have to be reasonably close to the correct time for NTP to accept any changes.
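      A hedged illustration of why the advice above works, assuming chronyd (the NTP daemon on recent XCP-ng/CentOS bases): the daemon refuses to slew the clock past a panic threshold (1000 seconds by default for chronyd), so a badly wrong clock has to be stepped once before NTP will keep it in sync. The threshold check below is a sketch, and the chronyd command in the comment is just one common way to perform that initial step.

      ```shell
      # Sketch (not from the thread): decide whether the clock offset is too
      # large for the NTP daemon to slew, in which case it must be stepped first.
      offset_exceeds_panic() {
        # $1 = absolute clock offset in seconds; 1000 is chronyd's default limit
        awk -v off="$1" 'BEGIN { exit (off > 1000) ? 0 : 1 }'
      }

      if offset_exceeds_panic 5000; then
        # Step once, then let the daemon take over, e.g.:
        #   chronyd -q 'pool pool.ntp.org iburst' && systemctl restart chronyd
        echo "offset too large: step the clock first"
      else
        echo "offset small enough: NTP can slew it"
      fi
      ```

      With the clock within the threshold, the daemon can then keep it in sync on its own.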
    • Mirror backup: No new data to upload for this vm? (Backup)

      0 Votes, 6 Posts, 81 Views

      Bastien Nollet: Thanks @Forza, I'll try on my own with a similar configuration.
    • 🛰️ XO 6: dedicated thread for all your feedback! (Xen Orchestra)

      5 Votes, 26 Posts, 690 Views

      @MajorP93 If you press the link under the Storage repository column, it will open in XO 5. I think it's by design until XO 6 is fully "released". If you press inside the other 4 columns, it will show info about that repo.
    • log_fs_usage / /var/log directory on pool master filling up constantly (XCP-ng)

      0 Votes, 2 Posts, 25 Views

      bvitnik: @MajorP93 The amount of logging is directly proportional to the number of hosts, VMs, SRs and clients (Xen Orchestra, XCP-ng Center...). If you have a lot of those, it's rather normal to have huge logs. Now, 5 hosts and 2 SRs does not seem like much, so I wouldn't expect you to have problems with huge logs; there could be something else going on there. Try restarting your hosts to clear any stuck processes and internal tasks that could be spamming the logs.

      We started having problems with /var/log size when we got into the range of 15+ hosts, 10+ SRs and 1000+ VMs per pool. Unfortunately, the log partition cannot be expanded, as it sits at the end of the disk, followed only by the swap partition. Our workaround was to patch the installer to create a larger 8 GB log partition instead of the standard 4 GB. Of course, we then had to reinstall all of our hosts.
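      Before resorting to a bigger partition, it may help to see which files actually dominate the log partition. A minimal sketch (directory and count are illustrative, not from the thread):

      ```shell
      # Sketch: list the largest entries under a directory, largest last.
      # Useful when /var/log on the pool master keeps filling up.
      largest_files() {
        # $1 = directory to scan, $2 = number of entries to show
        du -ak "$1" 2>/dev/null | sort -n | tail -n "$2"
      }

      # On a live host you would run, e.g.:
      largest_files /var/log 10
      ```

      Whatever dominates the output (xensource.log, daemon.log, SMlog, ...) is the place to look for a spamming component or a misbehaving logrotate rule.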
    • Mirror of full backups with low retention - copies all vms and then deletes them (Backup)

      0 Votes, 4 Posts, 78 Views

      Bastien Nollet: I confirm that this is the current behaviour, as @pilow reported at https://xcp-ng.org/forum/post/99446. We might change it in the future to make it better, but it won't be trivial to change.
    • How to Install XCP-ng Guest Tools on Rocky Linux 10? (Compute)

      0 Votes, 7 Posts, 144 Views

      stormi: @gduperrey said in How to Install XCP-ng Guest Tools on Rocky Linux 10?:
      > but I don't have a release date yet, even for testing

      Actually, it's already available as xcp-ng-pv-tools in the xcp-ng-incoming repository. What Gaël means is that we haven't run CI on it yet, so we haven't moved the package to the testing repository yet, which is when we usually invite users to test. However, here I'm able to say that there's no risk in installing it now for testing, with:

          yum update xcp-ng-pv-tools --enablerepo=xcp-ng-incoming,xcp-ng-ci,xcp-ng-testing,xcp-ng-candidates

      (The testing repos will only be enabled for the duration of the command, not permanently.)
    • Potential bug with Windows VM backup: "Body Timeout Error" (Backup)

      2 Votes, 39 Posts, 3k Views
      @andriy.sultanov said in Potential bug with Windows VM backup: "Body Timeout Error":
      > xe-toolstack-restart

      Okay, I was able to replicate the issue. This is the setup I used that resulted in the "body timeout error" previously discussed in this thread:

      OS: Windows Server 2019 Datacenter
      [image: 1764587172170-1.png]
      [image: 1764587178380-2.png]

      The versions of the packages in question that were used to replicate the issue (XCP-ng 8.3, fully upgraded):

          [11:58 dat-xcpng-test01 ~]# rpm -q xapi-core
          xapi-core-25.27.0-2.2.xcpng8.3.x86_64
          [11:59 dat-xcpng-test01 ~]# rpm -q qcow-stream-tool
          qcow-stream-tool-25.27.0-2.2.xcpng8.3.x86_64
          [11:59 dat-xcpng-test01 ~]# rpm -q vhd-tool
          vhd-tool-25.27.0-2.2.xcpng8.3.x86_64

      Result:
      [image: 1764587232535-3.png]

      Backup log:

          {
            "data": { "mode": "full", "reportWhen": "failure" },
            "id": "1764585634255",
            "jobId": "b19ed05e-a34f-4fab-b267-1723a7195f4e",
            "jobName": "Full-Backup-Test",
            "message": "backup",
            "scheduleId": "579d937a-cf57-47b2-8cde-4e8325422b15",
            "start": 1764585634255,
            "status": "failure",
            "infos": [ { "data": { "vms": [ "36c492a8-e321-ef2b-94dc-a14e5757d711" ] }, "message": "vms" } ],
            "tasks": [
              {
                "data": { "type": "VM", "id": "36c492a8-e321-ef2b-94dc-a14e5757d711", "name_label": "Win2019_EN_DC_TEST" },
                "id": "1764585635692",
                "message": "backup VM",
                "start": 1764585635692,
                "status": "failure",
                "tasks": [
                  { "id": "1764585635919", "message": "snapshot", "start": 1764585635919, "status": "success", "end": 1764585644161, "result": "0f548c1f-ce5c-56e3-0259-9c59b7851a17" },
                  {
                    "data": { "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550", "type": "remote", "isFull": true },
                    "id": "1764585644192",
                    "message": "export",
                    "start": 1764585644192,
                    "status": "failure",
                    "tasks": [
                      {
                        "id": "1764585644201",
                        "message": "transfer",
                        "start": 1764585644201,
                        "status": "failure",
                        "end": 1764586308921,
                        "result": { "name": "BodyTimeoutError", "code": "UND_ERR_BODY_TIMEOUT", "message": "Body Timeout Error", "stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:588:17)\n at process.processTimers (node:internal/timers:523:7)" }
                      }
                    ],
                    "end": 1764586308922,
                    "result": { "name": "BodyTimeoutError", "code": "UND_ERR_BODY_TIMEOUT", "message": "Body Timeout Error", "stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:588:17)\n at process.processTimers (node:internal/timers:523:7)" }
                  },
                  { "id": "1764586443440", "message": "clean-vm", "start": 1764586443440, "status": "success", "end": 1764586443459, "result": { "merge": false } },
                  { "id": "1764586443624", "message": "snapshot", "start": 1764586443624, "status": "success", "end": 1764586451966, "result": "c3e9736e-d6eb-3669-c7b8-f603333a83bf" },
                  {
                    "data": { "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550", "type": "remote", "isFull": true },
                    "id": "1764586452003",
                    "message": "export",
                    "start": 1764586452003,
                    "status": "success",
                    "tasks": [ { "id": "1764586452008", "message": "transfer", "start": 1764586452008, "status": "success", "end": 1764586686887, "result": { "size": 10464489322 } } ],
                    "end": 1764586686900
                  },
                  { "id": "1764586690122", "message": "clean-vm", "start": 1764586690122, "status": "success", "end": 1764586690140, "result": { "merge": false } }
                ],
                "warnings": [ { "data": { "attempt": 1, "error": "Body Timeout Error" }, "message": "Retry the VM backup due to an error" } ],
                "end": 1764586690142
              }
            ],
            "end": 1764586690143
          }

      I then enabled your test repository and installed the packages that you mentioned:

          [12:01 dat-xcpng-test01 ~]# rpm -q xapi-core
          xapi-core-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64
          [12:08 dat-xcpng-test01 ~]# rpm -q vhd-tool
          vhd-tool-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64
          [12:08 dat-xcpng-test01 ~]# rpm -q qcow-stream-tool
          qcow-stream-tool-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64

      I restarted the toolstack and re-ran the backup job. Unfortunately, it did not solve the issue and made the backup behave very strangely:

      [image: 1764587340331-9c9e9fdc-8385-4df2-9d23-7b0e4ecee0cd-grafik.png]

      The backup job ran for only a few seconds and reported that it was "successful", but only 10.83 KiB were transferred. There are 18 GB of used space on this VM, so the data was unfortunately not transferred by the backup job.

      [image: 1764587449301-25deccb4-295e-4ce1-a015-159780536122-grafik.png]

      Here is the backup log:

          {
            "data": { "mode": "full", "reportWhen": "failure" },
            "id": "1764586964999",
            "jobId": "b19ed05e-a34f-4fab-b267-1723a7195f4e",
            "jobName": "Full-Backup-Test",
            "message": "backup",
            "scheduleId": "579d937a-cf57-47b2-8cde-4e8325422b15",
            "start": 1764586964999,
            "status": "success",
            "infos": [ { "data": { "vms": [ "36c492a8-e321-ef2b-94dc-a14e5757d711" ] }, "message": "vms" } ],
            "tasks": [
              {
                "data": { "type": "VM", "id": "36c492a8-e321-ef2b-94dc-a14e5757d711", "name_label": "Win2019_EN_DC_TEST" },
                "id": "1764586966983",
                "message": "backup VM",
                "start": 1764586966983,
                "status": "success",
                "tasks": [
                  { "id": "1764586967194", "message": "snapshot", "start": 1764586967194, "status": "success", "end": 1764586975429, "result": "ebe5c4e2-5746-9cb3-7df6-701774a679b5" },
                  {
                    "data": { "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550", "type": "remote", "isFull": true },
                    "id": "1764586975453",
                    "message": "export",
                    "start": 1764586975453,
                    "status": "success",
                    "tasks": [ { "id": "1764586975473", "message": "transfer", "start": 1764586975473, "status": "success", "end": 1764586981992, "result": { "size": 11093 } } ],
                    "end": 1764586982054
                  },
                  { "id": "1764586985271", "message": "clean-vm", "start": 1764586985271, "status": "success", "end": 1764586985290, "result": { "merge": false } }
                ],
                "end": 1764586985291
              }
            ],
            "end": 1764586985292
          }

      If you need me to test something else, or if I should provide some log file from the XCP-ng system, please let me know. Best regards.
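      Logs like the two above are deeply nested, which makes them hard to eyeball. A hedged sketch of one way to flatten them, assuming only the message/status/tasks fields visible in the logs above (the file name is illustrative; save the JSON from the XO UI first):

      ```shell
      # Sketch: walk the nested "tasks" arrays of an XO backup log and print
      # each task's message and status, indented by nesting depth.
      summarize_backup_log() {
        python3 - "$1" <<'PYEOF'
      import json, sys

      def walk(task, depth=0):
          # every node in the logs above carries "message" and "status";
          # subtasks nest under "tasks"
          print("  " * depth + task.get("message", "?") + ": " + str(task.get("status")))
          for sub in task.get("tasks", []):
              walk(sub, depth + 1)

      with open(sys.argv[1]) as f:
          walk(json.load(f))
      PYEOF
      }

      # usage: summarize_backup_log backup-log.json
      ```

      On the first log above this would show the failed transfer, the retry warning path, and the eventual successful export at a glance.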
    • Translations (Non-English speakers)

      0 Votes, 5 Posts, 80 Views

      olivierlambert: Yes, accounts aren't related.
    • XCP-ng Windows PV tools announcements (News)

      0 Votes, 43 Posts, 4k Views

      @dinhngtu said in XCP-ng Windows PV tools announcements:
      > @probain The canonical way is to check the product_id instead: https://docs.ansible.com/projects/ansible/latest/collections/ansible/windows/win_package_module.html#parameter-product_id The ProductCode changes every time a new version of XCP-ng Windows PV tools is released, and you can get it from each release's MSI.

      No problem... If you ever decide to offer the .exe file as a separate item, not bundled within the zip file, then I would be even happier. But until then, thanks for everything!