Categories

  • All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    @rzr After upgrading two main pools, I'm having CR delta backup issues. Everything was working before the XCP-ng update; now every VM fails with the same "Backup fell back to a full" error. I'm using XO master db9c4, and the same XO setup was working just fine before the XCP-ng update.

    Edit 2: XO logs:

      Apr 15 22:55:40 xo1 xo-server[1409]: 2026-04-16T02:55:40.613Z xo:backups:worker INFO starting backup
      Apr 15 22:55:42 xo1 xo-server[1409]: 2026-04-16T02:55:42.006Z xo:xapi:xapi-disks INFO export through vhd
      Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.093Z xo:xapi:vdi INFO OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e was already destroyed {
        vdiRef: 'OpaqueRef:49ff18d3-5c18-176c-4930-0163c6727c2b',
        vbdRef: 'OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e'
      }
      Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.839Z xo:xapi:vdi INFO OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82 has been disconnected from dom0 {
        vdiRef: 'OpaqueRef:02f9ba92-1ee2-88eb-f660-a2cf3eeb287d',
        vbdRef: 'OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82'
      }
      Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.910Z xo:xapi:vm WARN _assertHealthyVdiChain, could not fetch VDI {
        error: XapiError: UUID_INVALID(VDI, 8f233bfc-9deb-4a06-aa07-0510de7496a1)
            at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12)
            at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/transports/json-rpc.mjs:38:21
            at process.processTicksAndRejections (node:internal/process/task_queues:104:5) {
          code: 'UUID_INVALID',
          params: [ 'VDI', '8f233bfc-9deb-4a06-aa07-0510de7496a1' ],
          call: { duration: 3, method: 'VDI.get_by_uuid', params: [Array] },
          url: undefined,
          task: undefined
        }
      }
      Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.732Z xo:xapi:xapi-disks INFO Error in openNbdCBT XapiError: SR_BACKEND_FAILURE_460(, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated], )
          at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12)
          at default (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_getTaskResult.mjs:13:29)
          at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1078:24)
          at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1112:14
          at Array.forEach (<anonymous>)
          at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1102:12)
          at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1275:14)
          at process.processTicksAndRejections (node:internal/process/task_queues:104:5) {
        code: 'SR_BACKEND_FAILURE_460',
        params: [
          '',
          'Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]',
          ''
        ],
        call: undefined,
        url: undefined,
        task: task {
          uuid: '8fae41b4-de82-789c-980a-5ff2d490d2d8',
          name_label: 'Async.VDI.list_changed_blocks',
          name_description: '',
          allowed_operations: [],
          current_operations: {},
          created: '20260416T02:55:46Z',
          finished: '20260416T02:55:46Z',
          status: 'failure',
          resident_on: 'OpaqueRef:7b987b11-ada0-99ce-d831-6e589bf34b50',
          progress: 1,
          type: '<none/>',
          result: '',
          error_info: [
            'SR_BACKEND_FAILURE_460',
            '',
            'Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]',
            ''
          ],
          other_config: {},
          subtask_of: 'OpaqueRef:NULL',
          subtasks: [],
          backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_utils.ml)(line 150))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))'
        }
      }
      Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.735Z xo:xapi:xapi-disks INFO export through vhd
      Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.115Z xo:xapi:vdi WARN invalid HTTP header in response body {
        body: 'HTTP/1.1 500 Internal Error\r\n' +
          'content-length: 318\r\n' +
          'content-type: text/html\r\n' +
          'connection: close\r\n' +
          'cache-control: no-cache, no-store\r\n' +
          '\r\n' +
          '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac; CBT metadata ]</body></html>'
      }
      Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.124Z xo:xapi:xapi-disks WARN can't compute delta OpaqueRef:e7de1446-34fd-1ae8-4680-351b1e72b2dd from OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac, fallBack to a full {
        error: Error: invalid HTTP header in response body
            at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:37:19)
            at process.processTicksAndRejections (node:internal/process/task_queues:104:5)
            at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:261:5)
            at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)
            at async XapiVhdStreamSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)
            at async #openExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:182:7)
            at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:97:22)
            at async XapiDiskSource.openSource (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:258:18)
            at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
            at async file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/backups/_incrementalVm.mjs:66:5
      }
      Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.126Z xo:xapi:xapi-disks INFO export through vhd
      Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.047Z xo:backups:worker INFO backup has ended
      Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.231Z xo:backups:worker INFO process will exit {
        duration: 43618102,
        exitCode: 0,
        resourceUsage: {
          userCPUTime: 45307253,
          systemCPUTime: 6674413,
          maxRSS: 30928,
          sharedMemorySize: 0,
          unsharedDataSize: 0,
          unsharedStackSize: 0,
          minorPageFault: 287968,
          majorPageFault: 0,
          swappedOut: 0,
          fsRead: 0,
          fsWrite: 0,
          ipcSent: 0,
          ipcReceived: 0,
          signalsCount: 0,
          voluntaryContextSwitches: 14665,
          involuntaryContextSwitches: 962
        },
        summary: { duration: '44s', cpuUsage: '119%', memoryUsage: '30.2 MiB' }
      }
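    The decisive failure in the log above is CBT-related: `Async.VDI.list_changed_blocks` returns `SR_BACKEND_FAILURE_460` with "Source and target VDI are unrelated", so XO cannot compute a delta and falls back to a full export. As a quick triage aid, a small script (hypothetical, not part of Xen Orchestra) can pull the fallback pairs and XAPI error codes out of pasted journal text:

    ```python
    import re

    # Hypothetical helper: scan xo-server journal text for delta-backup
    # fallback warnings and the XAPI error codes that accompany them.
    FALLBACK_RE = re.compile(r"can't compute delta (\S+) from ([^,\s]+)")
    ERROR_RE = re.compile(r"code: '([A-Z_0-9]+)'")

    def summarize_fallbacks(journal_text):
        """Return ([(target_ref, source_ref), ...], sorted error codes)."""
        fallbacks = FALLBACK_RE.findall(journal_text)
        errors = sorted(set(ERROR_RE.findall(journal_text)))
        return fallbacks, errors

    # Abbreviated sample mimicking the log lines quoted in the post.
    sample = (
        "xo:xapi:xapi-disks INFO Error in openNbdCBT XapiError: "
        "SR_BACKEND_FAILURE_460(...)\n"
        "code: 'SR_BACKEND_FAILURE_460',\n"
        "xo:xapi:xapi-disks WARN can't compute delta OpaqueRef:e7de1446 "
        "from OpaqueRef:3b37047e, fallBack to a full\n"
    )

    pairs, codes = summarize_fallbacks(sample)
    ```

    Run against the full journal dump, this lists every VDI pair that triggered a fallback, which makes it easier to see whether all VMs fail the same way or only some disks lost their CBT chain after the host update.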
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    @poddingue Thank you, that was it! I had a feeling the issue was around the path with the four slashes, but I couldn't figure out why, what, or where. Essentially, after setting the working directory to /tmp for my docker run, it worked. Here is an extract of the working build step for install.img:

      - name: Build install.img
        run: |
          XCPNG_VER="${{ github.event.inputs.xcpng_version }}"
          docker run --rm \
            --user root -w /tmp \
            -v "$(pwd)/create-install-image:/create-install-image:ro" \
            -v "/tmp/RPM-GPG-KEY-xcp-ng-ce:/etc/pki/rpm-gpg/RPM-GPG-KEY-xcp-ng-ce" \
            -v "$(pwd):/output" \
            xcp-ng-build-ready \
            bash -ce "
              rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-xcpng
              rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-xcp-ng-ce
              /create-install-image/scripts/create-installimg.sh \
                --output /output/install-${XCPNG_VER}.img \
                --define-repo base!https://updates.xcp-ng.org/8/${XCPNG_VER}/base \
                --define-repo updates!https://updates.xcp-ng.org/8/${XCPNG_VER}/updates \
                ${XCPNG_VER}
              echo 'install.img built'
            "

    Regarding the output you wanted to see, here it is when it fails. First, for context, the way I trigger the container:

      sudo docker run --rm -it -v "$(pwd)/create-install-image:/create-install-image:ro" -v "$(pwd):/output" b292e8a21068 /bin/bash
      ./create-install-image/scripts/create-installimg.sh --output /output/instal.img 8.3
      -----Set REPOS-----
      --- PWD var and TMPDIR content----
      /
      total 20
      drwx------ 4 root root 4096 Apr 16 00:54 .
      drwxr-xr-x 1 root root 4096 Apr 16 00:54 ..
      drwx------ 2 root root 4096 Apr 16 00:54 rootfs-FJWbFM
      -rw------- 1 root root  295 Apr 16 00:54 yum-HRyIb1.conf
      drwx------ 2 root root 4096 Apr 16 00:54 yum-repos-1FbWwV.d
      --- ISSUE happens here *setup_yum_repos* ----
      CRITICAL:yum.cli:Config error: Error accessing file for config file:////tmpdir-sApL80/yum-HRyIb1.conf

    As soon as I move to a directory other than the root /, the issue goes away. Now going through the ISO build. With kind regards.
  • 3k Topics
    28k Posts
    florent
    That's better, but it still grew during the night [image]. (The process also hit an out-of-memory error when I asked for a memory dump.) I have reduced the memory allocated to the REST API and will continue monitoring.
  • Our hyperconverged storage solution

    44 Topics
    731 Posts
    olivierlambert
    Different use cases: Ceph is better with more hosts (at least 6 or 7 minimum), while XOSTOR is better between 3 and 7/8 hosts. We might have better Ceph support in the future for large clusters.
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.