XCP-ng

    • Unable to update XOA

      Management
      0 Votes • 9 Posts • 23 Views
      @fred974 You should have an NTP server configured on both XOA and the XCP-ng host; perhaps the daemons are not started?
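As a hedged sketch (service names vary by distribution and version; chronyd is typical on XCP-ng 8.x dom0, while an XOA VM may use ntp or systemd-timesyncd), one way to check which time sync daemon is actually running on either side:

```shell
# Probe for a running time synchronization daemon. The service names are
# assumptions: chronyd on XCP-ng 8.x, ntp/ntpd or systemd-timesyncd on
# Debian-based XOA. The command-v guard keeps this safe on any system.
active_sync=""
for svc in chronyd ntpd ntp systemd-timesyncd; do
  if command -v systemctl >/dev/null 2>&1 && systemctl is-active --quiet "$svc" 2>/dev/null; then
    active_sync="$svc"
  fi
done
echo "active time sync service: ${active_sync:-none found}"
```

If nothing is reported, start the relevant service (e.g. `systemctl start chronyd`) before retrying the update.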
    • 🛰️ XO 6: dedicated thread for all your feedback!

      Xen Orchestra
      5 Votes • 24 Posts • 625 Views
      pdonias
      @ph7 Thanks for all the feedback! We took note of everything, and we're already fixing some of the issues. @jr-m4, @probain and @acebmxer: we were eventually able to reproduce the VDI name bug, and we'll fix it! @Davidj-0 asked in this thread whether the dependency step should be added to https://docs.xen-orchestra.com/installation#installing-dependencies, or whether it will no longer be necessary after the next release. It won't be necessary after we merge this PR, which should be very soon.
    • Mirror backup: No new data to upload for this VM?

      Backup
      0 Votes • 5 Posts • 62 Views
      Forza
      @Bastien-Nollet Here is the incremental backup config. Originally we only had the remote called srv04-incremental. I have now added srv12-incremental and wanted to copy all existing backups over to the new remote. I did the same with the full backups too. Now each backup job uses both remotes. [image: 1764599046499-200dc1a4-d808-49b8-ac44-201b2226e215-image.png] [image: 1764599160271-f46f5079-c10d-46e7-96dd-fd3a9fc76924-image.png]
    • How to Install XCP-ng Guest Tools on Rocky Linux 10?

      Compute
      0 Votes • 7 Posts • 134 Views
      stormi
      @gduperrey said in How to Install XCP-ng Guest Tools on Rocky Linux 10?: "but I don't have a release date yet, even for testing". Actually, it's already available as xcp-ng-pv-tools in the xcp-ng-incoming repository. What Gaël means is that we haven't run CI on it yet, so we haven't moved the package to the testing repository, which is when we usually invite users to test. However, in this case I can say there's no risk in installing it now for testing, with:

          yum update xcp-ng-pv-tools --enablerepo=xcp-ng-incoming,xcp-ng-ci,xcp-ng-testing,xcp-ng-candidates

      (The extra repositories are only enabled for the duration of the command, not permanently.)
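The `--enablerepo` flag makes this a one-shot operation: the listed repositories apply only to that single yum invocation and stay disabled in the repo configuration afterwards. A minimal sketch of the same step, with a guard so the snippet is safe to run off an XCP-ng host, plus a verification query:

```shell
# --enablerepo applies only to this one yum run; the repos remain
# disabled in /etc/yum.repos.d afterwards.
cmd="yum update xcp-ng-pv-tools --enablerepo=xcp-ng-incoming,xcp-ng-ci,xcp-ng-testing,xcp-ng-candidates"
if command -v yum >/dev/null 2>&1; then
  $cmd && rpm -q xcp-ng-pv-tools   # confirm which version ended up installed
else
  echo "not an XCP-ng host; would run: $cmd"
fi
```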
    • Potential bug with Windows VM backup: "Body Timeout Error"

      Backup
      2 Votes • 39 Posts • 3k Views
      @andriy.sultanov said in Potential bug with Windows VM backup: "Body Timeout Error": xe-toolstack-restart Okay I was able to replicate the issue. This is the setup that I used and that resulted in the "body timeout error" previously discussed in this thread: OS: Windows Server 2019 Datacenter [image: 1764587172170-1.png] [image: 1764587178380-2.png] The versions of the packages in question that were used in order to replicate the issue (XCP-ng 8.3, fully upgraded): [11:58 dat-xcpng-test01 ~]# rpm -q xapi-core xapi-core-25.27.0-2.2.xcpng8.3.x86_64 [11:59 dat-xcpng-test01 ~]# rpm -q qcow-stream-tool qcow-stream-tool-25.27.0-2.2.xcpng8.3.x86_64 [11:59 dat-xcpng-test01 ~]# rpm -q vhd-tool vhd-tool-25.27.0-2.2.xcpng8.3.x86_64 Result: [image: 1764587232535-3.png] Backup log: { "data": { "mode": "full", "reportWhen": "failure" }, "id": "1764585634255", "jobId": "b19ed05e-a34f-4fab-b267-1723a7195f4e", "jobName": "Full-Backup-Test", "message": "backup", "scheduleId": "579d937a-cf57-47b2-8cde-4e8325422b15", "start": 1764585634255, "status": "failure", "infos": [ { "data": { "vms": [ "36c492a8-e321-ef2b-94dc-a14e5757d711" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "36c492a8-e321-ef2b-94dc-a14e5757d711", "name_label": "Win2019_EN_DC_TEST" }, "id": "1764585635692", "message": "backup VM", "start": 1764585635692, "status": "failure", "tasks": [ { "id": "1764585635919", "message": "snapshot", "start": 1764585635919, "status": "success", "end": 1764585644161, "result": "0f548c1f-ce5c-56e3-0259-9c59b7851a17" }, { "data": { "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550", "type": "remote", "isFull": true }, "id": "1764585644192", "message": "export", "start": 1764585644192, "status": "failure", "tasks": [ { "id": "1764585644201", "message": "transfer", "start": 1764585644201, "status": "failure", "end": 1764586308921, "result": { "name": "BodyTimeoutError", "code": "UND_ERR_BODY_TIMEOUT", "message": "Body Timeout Error", "stack": "BodyTimeoutError: Body 
Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:588:17)\n at process.processTimers (node:internal/timers:523:7)" } } ], "end": 1764586308922, "result": { "name": "BodyTimeoutError", "code": "UND_ERR_BODY_TIMEOUT", "message": "Body Timeout Error", "stack": "BodyTimeoutError: Body Timeout Error\n at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202511080402/node_modules/undici/lib/util/timers.js:162:13)\n at listOnTimeout (node:internal/timers:588:17)\n at process.processTimers (node:internal/timers:523:7)" } }, { "id": "1764586443440", "message": "clean-vm", "start": 1764586443440, "status": "success", "end": 1764586443459, "result": { "merge": false } }, { "id": "1764586443624", "message": "snapshot", "start": 1764586443624, "status": "success", "end": 1764586451966, "result": "c3e9736e-d6eb-3669-c7b8-f603333a83bf" }, { "data": { "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550", "type": "remote", "isFull": true }, "id": "1764586452003", "message": "export", "start": 1764586452003, "status": "success", "tasks": [ { "id": "1764586452008", "message": "transfer", "start": 1764586452008, "status": "success", "end": 1764586686887, "result": { "size": 10464489322 } } ], "end": 1764586686900 }, { "id": "1764586690122", "message": "clean-vm", "start": 1764586690122, "status": "success", "end": 1764586690140, "result": { "merge": false } } ], "warnings": [ { "data": { "attempt": 1, "error": "Body Timeout Error" }, "message": "Retry the VM backup due to an error" } ], "end": 1764586690142 } ], "end": 1764586690143 } I then enabled your 
test repository and installed the packages that you mentioned: [12:01 dat-xcpng-test01 ~]# rpm -q xapi-core xapi-core-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64 [12:08 dat-xcpng-test01 ~]# rpm -q vhd-tool vhd-tool-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64 [12:08 dat-xcpng-test01 ~]# rpm -q qcow-stream-tool qcow-stream-tool-25.27.0-2.3.0.xvafix.1.xcpng8.3.x86_64 I restarted tool-stack and re-ran the backup job. Unfortunately it did not solve the issue and made the backup behave very strangely: [image: 1764587340331-9c9e9fdc-8385-4df2-9d23-7b0e4ecee0cd-grafik.png] The backup job ran only a few seconds and reported that it was "successful". But only 10.83KiB were transferred. There are 18GB used space on this VM. So the data unfortunately was not transferred by the backup job. [image: 1764587449301-25deccb4-295e-4ce1-a015-159780536122-grafik.png] Here is the backup log: { "data": { "mode": "full", "reportWhen": "failure" }, "id": "1764586964999", "jobId": "b19ed05e-a34f-4fab-b267-1723a7195f4e", "jobName": "Full-Backup-Test", "message": "backup", "scheduleId": "579d937a-cf57-47b2-8cde-4e8325422b15", "start": 1764586964999, "status": "success", "infos": [ { "data": { "vms": [ "36c492a8-e321-ef2b-94dc-a14e5757d711" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "36c492a8-e321-ef2b-94dc-a14e5757d711", "name_label": "Win2019_EN_DC_TEST" }, "id": "1764586966983", "message": "backup VM", "start": 1764586966983, "status": "success", "tasks": [ { "id": "1764586967194", "message": "snapshot", "start": 1764586967194, "status": "success", "end": 1764586975429, "result": "ebe5c4e2-5746-9cb3-7df6-701774a679b5" }, { "data": { "id": "f1bc8d14-10dd-4440-bb1d-409b91f3b550", "type": "remote", "isFull": true }, "id": "1764586975453", "message": "export", "start": 1764586975453, "status": "success", "tasks": [ { "id": "1764586975473", "message": "transfer", "start": 1764586975473, "status": "success", "end": 1764586981992, "result": { "size": 11093 } } ], "end": 
1764586982054 }, { "id": "1764586985271", "message": "clean-vm", "start": 1764586985271, "status": "success", "end": 1764586985290, "result": { "merge": false } } ], "end": 1764586985291 } ], "end": 1764586985292 } If you need me to test something else or if I should provide some log file from the XCP-ng system please let me know. Best regards
    • Translations

      Non-English speakers
      0 Votes • 5 Posts • 74 Views
      olivierlambert
      Yes, the accounts aren't related.
    • Mirror of full backups with low retention: copies all VMs and then deletes them

      Backup
      0 Votes • 3 Posts • 48 Views
      Forza
      It looks like it transfers one backup, deletes it, starts the next backup, deletes it, and so on. This seems rather inefficient for full backups. I can understand that it has to transfer the full chain when dealing with incremental backups, even if it has to prune and merge them afterwards. I also noticed that even though I set the retention to 1000 in the full mirror job, not all backups are copied: [image: 1764584206009-c6f6fd61-d80c-4e3f-9071-4370003df9b4-image.png] [image: 1764584286342-762fb821-06c7-400d-92d1-d277d44ff2dd-image.png]

      { "data": { "type": "VM", "id": "0ecd9bc3-b4e8-8f0e-e50d-6b94420ea742" }, "id": "1764584306124", "message": "backup VM", "start": 1764584306124, "status": "success", "infos": [ { "message": "No new data to upload for this VM" }, { "message": "No healthCheck needed because no data was transferred." } ], "tasks": [ { "id": "1764584306137:1", "message": "clean-vm", "start": 1764584306137, "status": "success", "end": 1764584306150, "result": { "merge": false } } ], "end": 1764584306151 },
    • Wazuh OVA appliance: how to make it work!

      Compute
      6 Votes • 4 Posts • 683 Views
      Thanks a lot for this! A few months ago I found your topic while trying out Wazuh, and it has been working well. After some errors on my home lab I had to reinstall everything and remembered your topic. On my installation, I just needed to set /dev/xvda1 (if I put only /dev/xvda, the VM would not start). My home lab runs on a ProLiant DL360 Gen9.
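On Xen guests the first virtual disk appears as /dev/xvda; when the root filesystem lives on a partition rather than the whole disk, the boot configuration's root device must name the partition (e.g. /dev/xvda1), not /dev/xvda. A hedged sketch for checking this from inside the guest (the actual layout depends on how the OVA was built):

```shell
# List block devices so you can see whether the root filesystem sits on
# the whole disk (xvda) or a partition (xvda1); the configured root
# device must match what is actually mounted at /. The fallback keeps
# the snippet safe where lsblk is unavailable.
out="$(lsblk -o NAME,TYPE,MOUNTPOINT 2>/dev/null || echo 'lsblk not available')"
echo "$out"
```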