Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    @gduperrey Seems to be working well on my test systems.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    olivierlambert
    Ping @Team-Hypervisor-Kernel
  • 3k Topics
    27k Posts
    JSylvia007
    @florent - Here is the JSON. Removing the Snapshots now and trying again with the merge synchronously toggled off. Note the remote is a Synology using NFS, if that matters.

    {
      "data": { "mode": "delta", "reportWhen": "failure" },
      "id": "1774449668020",
      "jobId": "7fc5396a-5383-4dab-91fe-6758eb8b7474",
      "jobName": "ADMIN VMS",
      "message": "backup",
      "scheduleId": "d09acecc-cc98-4cfd-84a4-5bfd1575b20f",
      "start": 1774449668020,
      "status": "failure",
      "infos": [
        {
          "data": {
            "vms": [
              "b827a2ad-361d-e44c-19ca-f9d632baacf8",
              "afe4bee2-745d-da4a-0016-c74751856556"
            ]
          },
          "message": "vms"
        }
      ],
      "tasks": [
        {
          "data": { "type": "VM", "id": "b827a2ad-361d-e44c-19ca-f9d632baacf8", "name_label": "ADMIN-VM01" },
          "id": "1774449670085",
          "message": "backup VM",
          "start": 1774449670085,
          "status": "success",
          "tasks": [
            { "id": "1774449670095", "message": "clean-vm", "start": 1774449670095, "status": "success", "end": 1774449670170, "result": { "merge": false } },
            { "id": "1774449670451", "message": "snapshot", "start": 1774449670451, "status": "success", "end": 1774449672123, "result": "dad1585e-4094-88aa-4894-d521fae5cb63" },
            {
              "data": { "id": "9f2e49f9-4e87-444a-aa68-4cbf73f28e6d", "isFull": false, "type": "remote" },
              "id": "1774449672123:0",
              "message": "export",
              "start": 1774449672123,
              "status": "success",
              "tasks": [
                { "id": "1774449673924", "message": "transfer", "start": 1774449673924, "status": "success", "end": 1774449690670, "result": { "size": 283115520 } },
                {
                  "id": "1774449697186",
                  "message": "clean-vm",
                  "start": 1774449697186,
                  "status": "success",
                  "tasks": [
                    { "id": "1774449698513", "message": "merge", "start": 1774449698513, "status": "success", "end": 1774449706694 }
                  ],
                  "end": 1774449706704,
                  "result": { "merge": true }
                }
              ],
              "end": 1774449706707
            }
          ],
          "end": 1774449706707
        },
        {
          "data": { "type": "VM", "id": "afe4bee2-745d-da4a-0016-c74751856556", "name_label": "ADMIN-VM02" },
          "id": "1774449670088",
          "message": "backup VM",
          "start": 1774449670088,
          "status": "failure",
          "tasks": [
            { "id": "1774449670096", "message": "clean-vm", "start": 1774449670096, "status": "success", "end": 1774449670110, "result": { "merge": false } },
            { "id": "1774449670452", "message": "snapshot", "start": 1774449670452, "status": "success", "end": 1774449673024, "result": "77d9de45-e6b7-d202-9245-7db47b6fd9c9" },
            {
              "data": { "id": "9f2e49f9-4e87-444a-aa68-4cbf73f28e6d", "isFull": true, "type": "remote" },
              "id": "1774449673024:0",
              "message": "export",
              "start": 1774449673024,
              "status": "failure",
              "tasks": [
                {
                  "id": "1774449674094",
                  "message": "transfer",
                  "start": 1774449674094,
                  "status": "failure",
                  "end": 1774451157435,
                  "result": {
                    "text": "HTTP/1.1 500 Internal Error\r\ncontent-length: 266\r\ncontent-type: text/html\r\nconnection: close\r\ncache-control: no-cache, no-store\r\n\r\n<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_IO_ERROR: [ Device I/O errors ]</body></html>",
                    "message": "stream has ended with not enough data (actual: 397, expected: 2097152)",
                    "name": "Error",
                    "stack": "Error: stream has ended with not enough data (actual: 397, expected: 2097152)\n at readChunkStrict (/opt/xo/xo-builds/xen-orchestra-202603241416/@vates/read-chunk/index.js:88:19)\n at process.processTicksAndRejections (node:internal/process/task_queues:104:5)\n at async #read (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:98:65)\n at async generator (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:199:22)\n at async Timeout.next (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@vates/generator-toolbox/dist/timeout.mjs:14:24)\n at async generatorWithLength (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/Throttled.mjs:12:44)\n at async Throttle.createThrottledGenerator (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@vates/generator-toolbox/dist/throttle.mjs:53:30)\n at async ThrottledDisk.diskBlocks (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/Disk.mjs:26:30)\n at async Promise.all (index 0)\n at async ForkedDisk.diskBlocks (file:///opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/disk-transform/dist/SynchronizedDisk.mjs:18:30)"
                  }
                },
                { "id": "1774451158098", "message": "clean-vm", "start": 1774451158098, "status": "success", "end": 1774451158157, "result": { "merge": false } }
              ],
              "end": 1774451158216
            }
          ],
          "end": 1774451158218,
          "result": {
            "errno": -2,
            "code": "ENOENT",
            "syscall": "stat",
            "path": "/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd",
            "message": "ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd'",
            "name": "Error",
            "stack": "Error: ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260325T144114Z.alias.vhd'\nFrom:\n at NfsHandler.addSyncStackTrace (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/local.js:21:26)\n at NfsHandler._getSize (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/local.js:113:48)\n at /opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/utils.js:29:26\n at new Promise (<anonymous>)\n at NfsHandler.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202603241416/@xen-orchestra/fs/dist/utils.js:24:12)\n at loopResolver (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:83:46)\n at new Promise (<anonymous>)\n at loop (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:85:22)\n at NfsHandler.retry (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:87:10)\n at NfsHandler._getSize (/opt/xo/xo-builds/xen-orchestra-202603241416/node_modules/promise-toolbox/retry.js:103:18)"
          }
        }
      ],
      "end": 1774451158219
    }
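    For a report this size, the failing steps can be pulled out by walking the task tree, for example with jq. A minimal sketch, assuming the JSON above has been saved locally as backup-report.json (the filename is just an example):

        # Print every task whose status is "failure", with its error message when one is attached.
        jq -r '
          .. | objects
          | select(.status? == "failure")
          | "\(.message): \(.result.message? // "no detail")"
        ' backup-report.json

    Against the report above, this should surface the failed transfer ("stream has ended with not enough data", behind the VDI_IO_ERROR) and the ENOENT on the 20260325T144114Z.alias.vhd file.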
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
    SuperDuckGuy
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I've tried creating disk groups of 3-4 drives with the same issue.
  • 33 Topics
    98 Posts
    @yann This is not PROD. If my understanding is correct, after some digging, the storage path to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).

    [10:05 xcp-ng-poc-1 ~]# xe vm-list
    The server could not join the liveset because the HA daemon could not access the heartbeat disk.
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
    Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
    [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
    Executing xe-toolstack-restart
    done.
    [10:07 xcp-ng-poc-1 ~]#

    On the storage side:

    [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
    uuid ( RO)                : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
        host-uuid ( RO)       : 0219cb2e-46b8-4657-bfa4-c924b59e373a
        sr-uuid ( RO)         : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
        device-config (MRO)   : SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
        currently-attached ( RO): false

    uuid ( RO)                : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
        host-uuid ( RO)       : cb9a2dc3-cc1d-4467-99eb-6896503b4e11
        sr-uuid ( RO)         : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
        device-config (MRO)   : multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
        currently-attached ( RO): false

    [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    Error code: SR_BACKEND_FAILURE_47
    Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
    [10:10 xcp-ng-poc-1 ~]#

    After that, XO-Lite came back up correctly.
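    For reference, the rough recovery order once the iSCSI path to the NetApp LUN is back, sketched from the commands above (UUIDs are placeholders; the SR_BACKEND_FAILURE_47 is expected for as long as the volume group on the LUN is not visible to the host):

        # 1. On the fenced host: keep HA from fencing it again, then restart the toolstack
        xe host-emergency-ha-disable --force
        xe-toolstack-restart

        # 2. Once the LUN answers again, re-attach the SR on every host in the pool
        xe pbd-list sr-uuid=<SR_UUID> params=uuid,host-uuid,currently-attached
        xe pbd-plug uuid=<PBD_UUID>    # repeat for each host's PBD

        # 3. Only after the heartbeat SR is plugged everywhere, re-enable HA
        xe pool-ha-enable heartbeat-sr-uuids=<SR_UUID>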