@Danp I got around this by using Clonezilla to copy the disks of the 2 VMs I was having trouble with. This is very strange. The VMs booted and ran fine on the old host, and I was able to clone them with Clonezilla to the new host, where they also boot and run fine. Something is preventing the native tools from migrating these specific disks, though.
Posts
-
RE: Migration fails: VDI_COPY_FAILED(End_of_file)
-
RE: Migration fails: VDI_COPY_FAILED(End_of_file)
@Danp any clues from the logs I posted?
-
RE: Migration fails: VDI_COPY_FAILED(End_of_file)
vm.copy { "vm": "7a6434bc-8dce-40e0-879f-a739a072f99a", "sr": "36a86edc-f16e-abad-b8a4-6f7f0f60aad2", "name": "DNS0_COPY" } { "code": "VDI_COPY_FAILED", "params": [ "End_of_file" ], "task": { "uuid": "2a25d3bc-b820-0971-d59a-b415dda0dc45", "name_label": "Async.VM.copy", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20250110T18:45:52Z", "finished": "20250110T18:46:06Z", "status": "failure", "resident_on": "OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "VDI_COPY_FAILED", "End_of_file" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [ "OpaqueRef:e66f3549-776d-4b7c-ae41-98b495d6c046" ], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 80))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 122))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 130))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 209))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 220))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 222))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 442))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/xapi_vm.ml)(line 858))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))" }, "message": "VDI_COPY_FAILED(End_of_file)", "name": "XapiError", "stack": "XapiError: VDI_COPY_FAILED(End_of_file) at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_XapiError.mjs:16:12) at default (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_getTaskResult.mjs:13:29) at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1068:24) at file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1102:14 at Array.forEach (<anonymous>) at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1092:12) at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/
-
RE: Migration fails: VDI_COPY_FAILED(End_of_file)
Instead of migrating, I tried creating a copy of the VM while it was powered off. First I tried copying it to the remote host, but that failed at around the same point. I then tried copying it to NFS shared storage, which also failed. Next I tried copying it to a different disk on the same host where the VM lives, and that failed. Finally I tried copying it to the same disk on the same host, and that failed too.
[13:45 marshall log]# tail -f SMlog | grep 388eb055 Jan 10 13:45:53 marshall SM: [13371] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:45:53 marshall SM: [13371] vdi_attach {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|1fa9d756-b057-4484-8958-f5fcb297a5fd|VDI.attach2', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': ['false'], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:fd4f282e-269a-4ccc-bfcb-224ff79b8ab0', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'} Jan 10 13:45:53 marshall SM: [13371] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:45:53 marshall SM: [13371] result: {'params_nbd': 'nbd:unix:/run/blktap-control/nbd/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3', 'o_direct_reason': 'RO_WITH_NO_PARENT', 'params': '/dev/sm/backend/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3', 'o_direct': True, 'xenstore_data': {'scsi/0x12/0x80': 'AIAAEjM4OGViMDU1LTk1YjktNDUgIA==', 'scsi/0x12/0x83': 'AIMAMQIBAC1YRU5TUkMgIDM4OGViMDU1LTk1YjktNDVkNC1hNGIwLTUxNzE2M2M5ZTRjMyA=', 'vdi-uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'mem-pool': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2'}} Jan 10 13:45:53 marshall SM: [13396] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:45:53 marshall SM: [13396] vdi_activate {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|8135bc8b-35b4-40cc-b6c8-292ced5ebdb9|VDI.activate', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': ['false'], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:ea013ff9-9743-48fe-907e-ae59015f1550', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_activate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'} Jan 10 13:45:53 marshall SM: [13396] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:45:53 marshall SM: [13396] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:45:53 marshall SM: [13396] Adding tag to: 388eb055-95b9-45d4-a4b0-517163c9e4c3 Jan 10 13:45:53 marshall SM: [13396] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:45:53 marshall SM: [13396] PhyLink(/dev/sm/phy/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3) -> /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd Jan 10 13:45:53 marshall SM: 
[13396] ['/usr/sbin/tap-ctl', 'open', '-p', '13451', '-m', '3', '-a', 'vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd', '-R', '-t', '50'] Jan 10 13:45:53 marshall SM: [13396] tap.activate: Launched Tapdisk(vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd, pid=13451, minor=3, state=R) Jan 10 13:45:53 marshall SM: [13396] DeviceNode(/dev/sm/backend/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3) -> /dev/xen/blktap-2/tapdev3 Jan 10 13:45:53 marshall SM: [13396] NBDLink(/run/blktap-control/nbd/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3) -> /run/blktap-control/nbd13451.3 Jan 10 13:45:53 marshall SM: [13396] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:46:05 marshall SM: [13743] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:46:05 marshall SM: [13743] vdi_deactivate {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|5ab8f590-253d-4f98-9913-d9893a7ec3d9|VDI.deactivate', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:7120f7bf-5d9e-4229-920d-746753ce4247', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'} Jan 10 13:46:05 marshall SM: [13743] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:46:05 marshall SM: [13743] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:46:05 marshall SM: [13743] tap.deactivate: Shut down Tapdisk(vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd, pid=13451, minor=3, state=R) Jan 10 13:46:05 marshall SM: [13743] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:46:05 marshall SM: [13743] Removed host key host_OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9 for 388eb055-95b9-45d4-a4b0-517163c9e4c3 Jan 10 13:46:05 marshall SM: [13743] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:46:05 marshall SM: [13809] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:46:05 marshall SM: [13809] vdi_detach {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|876fedb8-4245-4ff4-ad32-63a9de449a3e|VDI.detach', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:b4212357-6e51-4843-b1ac-0af23381abb5', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 
'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'} Jan 10 13:46:05 marshall SM: [13809] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:46:06 marshall SMGC: [13852] 388eb055(40.000G/13.491G)
-
RE: Migration fails: VDI_COPY_FAILED(End_of_file)
I think I checked the disk image the right way, and it reports that it's valid:
[13:30 marshall 36a86edc-f16e-abad-b8a4-6f7f0f60aad2]# vhd-util check -n 388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd
388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd is valid
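Since the copy dies at the same point every time, I also wondered about the rest of the VHD chain, not just the leaf. Something like this is what I had in mind, assuming the file SR path from the logs and that vhd-util behaves the same way on 8.2 (the parent file name is whatever the query returns, if there is one):
cd /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2
# print the parent of the leaf VHD, if it has one
vhd-util query -n 388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd -p
# run the same validity check against the reported parent
vhd-util check -n <parent-uuid>.vhd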
-
RE: Migration fails: VDI_COPY_FAILED(End_of_file)
Not really sure what I'm looking for in that file, but this is the last of what I saw while tailing the log and grepping for the first part of the VDI UUID:
[13:23 marshall log]# tail -f SMlog | grep 388eb055 Jan 10 13:26:42 marshall SM: [27086] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:43 marshall SM: [27139] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:43 marshall SM: [27139] vdi_snapshot {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|c333f7d1-6a7c-4ccd-96f6-1f927e55a8e5|VDI.snapshot', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:51ab012b-022f-4069-8121-c60ef2337dc5', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'driver_params': {'vhd-parent': '9405bfc7-ee0e-4b63-8c28-c464479f25b6', 'read-caching-enabled-on-18a404bb-dd5c-4795-944b-c10243d44cbb': 'false', 'host_OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9': 'RW', 'read-caching-reason-18a404bb-dd5c-4795-944b-c10243d44cbb': 'NO_RO_IMAGE', 'mirror': 'null'}, 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'} Jan 10 13:26:43 marshall SM: [27139] Pause request for 388eb055-95b9-45d4-a4b0-517163c9e4c3 Jan 10 13:26:43 marshall SM: [27163] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:43 marshall SM: [27163] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:43 marshall SM: [27163] Pause for 388eb055-95b9-45d4-a4b0-517163c9e4c3 Jan 10 13:26:43 marshall SM: [27163] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:43 marshall SM: [27139] FileVDI._snapshot for 388eb055-95b9-45d4-a4b0-517163c9e4c3 (type 2) Jan 10 13:26:43 marshall SM: [27139] ['/usr/bin/vhd-util', 'query', '--debug', '-d', '-n', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:43 marshall SM: [27139] FileVDI._link /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd to /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/85a71438-a176-4ced-a8b1-f80cdeb2e821.vhd Jan 10 13:26:43 marshall SM: [27139] ['/usr/sbin/td-util', 'snapshot', 'vhd', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd.new', '85a71438-a176-4ced-a8b1-f80cdeb2e821.vhd'] Jan 10 13:26:43 marshall SM: [27139] FileVDI._rename /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd.new to /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd Jan 10 13:26:43 marshall SM: [27139] ['/usr/sbin/td-util', 'query', 'vhd', '-p', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:43 marshall SM: [27139] Unpause request for 388eb055-95b9-45d4-a4b0-517163c9e4c3 secondary=null Jan 10 13:26:43 marshall SM: [27192] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:43 marshall SM: [27192] lock: acquired 
/var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:43 marshall SM: [27192] Unpause for 388eb055-95b9-45d4-a4b0-517163c9e4c3 Jan 10 13:26:43 marshall SM: [27192] Realpath: /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd Jan 10 13:26:43 marshall SM: [27192] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:43 marshall SM: [27192] ['/usr/sbin/tap-ctl', 'unpause', '-p', '25627', '-m', '3', '-2', 'null', '-a', 'vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:43 marshall SM: [27192] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:48 marshall SM: [27437] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:48 marshall SM: [27437] vdi_deactivate {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|93944e89-c639-4e21-bf1a-d8efbff43859|VDI.deactivate', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:d6e64cae-3365-448b-86fa-f4e0d6cc6ae1', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'} Jan 10 13:26:48 marshall SM: [27437] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:48 marshall SM: [27437] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:49 marshall SM: [27437] tap.deactivate: Shut down Tapdisk(vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd, pid=25627, minor=3, state=R) Jan 10 13:26:49 marshall SM: [27437] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:49 marshall SM: [27437] Removed host key host_OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9 for 388eb055-95b9-45d4-a4b0-517163c9e4c3 Jan 10 13:26:49 marshall SM: [27437] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi Jan 10 13:26:49 marshall SM: [27485] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd'] Jan 10 13:26:49 marshall SM: [27485] vdi_detach {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|383e83b2-b860-4f25-8de1-29d310e3e0c8|VDI.detach', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:cd5df917-b502-4b0e-975b-deac99e09852', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 
'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'} Jan 10 13:26:49 marshall SM: [27485] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
-
RE: Migration fails: VDI_COPY_FAILED(End_of_file)
@bigdweeb All hosts are running 8.2.1 with all patches applied too.
-
Migration fails: VDI_COPY_FAILED(End_of_file)
I have Xen Orchestra built from source and running, and it's up to date with the latest in the source tree. I have an existing pool of 3 Intel NUC8 machines, each with local disks, and a pool of 3 new Intel C3000 Atom-based boxes that I am migrating to.
I have migrated nearly all VMs from the old NUCs to the new Atom boxes through XO by powering down the VM, doing the migration, and powering it back up in the new pool. All these VMs have been nearly identical Debian 12 machines built around the same time, the same way, with my PXE server.
I have found two machines that I cannot migrate; I get the "VDI_COPY_FAILED(End_of_file) This is a XenServer/XCP-ng error" message when I try to migrate either of them. The VM I am concentrating on fails at 19-20% every time. After I noticed this pattern, I tried migrating it between two hosts in the old pool and it also fails at the same point. I also have shared NFS storage set up, and I tried migrating the VM to an existing member of the old pool while moving its disk to that NFS SR instead; it still fails at the same point.
This is the output I get when I download the log message.
vm.migrate { "vm": "7a6434bc-8dce-40e0-879f-a739a072f99a", "mapVifsNetworks": { "6766e3df-853c-4858-39f3-2779a20981c0": "8479000e-03b9-602e-9e55-694b72385ad1" }, "migrationNetwork": "540ad943-73ed-1886-9982-42148210c761", "sr": "46624e47-07d9-317f-983b-c416fedfb73f", "targetHost": "69473b68-bd12-4c92-a470-55ae4c0cb259" } { "code": "VDI_COPY_FAILED", "params": [ "End_of_file" ], "task": { "uuid": "62cc8e05-74ad-f550-5df2-90f4c3dce3c1", "name_label": "Async.VM.migrate_send", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20250110T12:24:30Z", "finished": "20250110T12:25:55Z", "status": "failure", "resident_on": "OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "VDI_COPY_FAILED", "End_of_file" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1564))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 131))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1228))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 2298))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))" }, "message": "VDI_COPY_FAILED(End_of_file)", "name": "XapiError", "stack": "XapiError: VDI_COPY_FAILED(End_of_file) at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_XapiError.mjs:16:12) at default (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_getTaskResult.mjs:13:29) at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1068:24) at file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1102:14 at Array.forEach (<anonymous>) at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1092:12) at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1265:14)" }%
-
RE: XO-Lite and Let's Encrypt certificate
@hellst0rm I don't see why it wouldn't work as long as everything is set to trust your internal CA root. It's pretty trivial to create a CA. I did it years ago, partly because I can't stand cert errors and partly as a learning exercise. The only annoying part is that it's one more thing to keep track of and renew, but if you're running a homelab it just becomes another part of the exercise.
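For anyone curious, the bare-bones version is only a couple of openssl commands; the file names, subject, and lifetime below are just placeholders:
# create the CA key and a self-signed CA certificate
openssl genrsa -out homelab-ca.key 4096
openssl req -x509 -new -key homelab-ca.key -sha256 -days 3650 -subj "/CN=Homelab Root CA" -out homelab-ca.crt
# then import homelab-ca.crt into the trust store of every client that should trust it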
-
RE: XO-Lite and Let's Encrypt certificate
I run my own CA internally and ran into the same issue the first time I tried XO Lite. The cert worked for connecting to the pool master, but I think the consoles wouldn't work, and connecting to VMs on other members of the pool definitely didn't work without accepting the cert error my browser reported. I figured out that those calls seem to happen by IP instead of by name, so I regenerated the certs on all my hosts with both the hostname and the IP as SANs, and that resolved the issue for me too. From what I can tell without digging any further, the real fix would be for XO Lite to make those calls to other resources by hostname instead of IP.
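In case it helps anyone else, the regeneration was basically just issuing each host's cert with both the hostname and the IP in the SAN list. Roughly what that looks like with a plain openssl CA (hostname, IP, and file names are made up, and the xe install step is from memory, so check the exact syntax for your version):
# san.cnf holds a single line: subjectAltName=DNS:xcp1.lab.lan,IP:192.168.1.11
openssl req -new -nodes -newkey rsa:2048 -keyout xcp1.key -out xcp1.csr -subj "/CN=xcp1.lab.lan"
openssl x509 -req -in xcp1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 -sha256 -extfile san.cnf -out xcp1.crt
# install it on the host so XAPI / XO Lite serve it
xe host-server-certificate-install certificate=xcp1.crt private-key=xcp1.key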
-
RE: Drivers for recent homelab NICs in XCP-ng 8.2
@stormi any chance of adding Realtek r8156? I started working on it a while ago but couldn't quite get it to work.
-
RE: lost access to vm
OK, I was able to fix this. I migrated the VM to the pool master from the CLI; in hindsight I don't think that was necessary, but I wanted to be sure I knew where it was running. I then brought up XO Lite in Chrome against the pool master, and at that point I could view the VM in the embedded console. It turned out the root partition had filesystem errors and the guest was stuck in the initramfs waiting for fsck to be run. Once I repaired the errors with fsck, the VM booted and I'm back in business.
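For the record, the repair itself was nothing fancy; from the (initramfs) prompt it was basically just running fsck against the root device and continuing the boot (the device name below is from memory, yours may differ):
# repair the root filesystem, answering yes to fixes
fsck -y /dev/xvda1
# leaving the initramfs shell lets the boot continue
exit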
-
lost access to vm
I have 3 NUCs in a pool, and a TrueNAS system serving NFS for a couple VMs. One of those VMs is my XOA server. I recently applied an available patch for TrueNAS but forgot to power down the XOA server beforehand. Afterwards I saw the filesystem on the Debian guest was read-only and realized what had happened.
I tried to resolve this by powering down the VM, but could only do so by forcing it off. I brought the VM back up, but now I can no longer ping it. I've tried everything I can think of, including restarting one of the hosts in the pool and starting the XOA VM on that host, in case the issue was a stale NFS mount on the original host. None of this has worked. Any ideas on how to get this back up and running?
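One more thing I was planning to try, in case it points anyone in the right direction: unplugging and replugging the NFS SR's PBD on the affected host, something along these lines (the UUIDs are placeholders):
# find the NFS SR and its PBDs
xe sr-list type=nfs params=uuid,name-label
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached
# replug the PBD on the affected host (this fails if VDIs on the SR are still in use)
xe pbd-unplug uuid=<pbd-uuid>
xe pbd-plug uuid=<pbd-uuid>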
-
RE: Xen-Orchestra build from source - Debian 11.7
@Mcvills ah, cool. Well at least that fix is there in case anyone else stumbles upon what I was hitting.
-
RE: Xen-Orchestra build from source - Debian 11.7
If you do an 'apt-get update', are you getting a GPG error about the Yarn repository signature being invalid? If so, maybe try the fix posted by boeboe on Feb 11th. I kept seeing that error every time I updated the system I have XO built on, and that finally cleared it up for me. I can't tell you whether it would have prevented the initial build from failing, though, because mine has been up for some time now.
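I don't remember the exact steps off-hand (boeboe's post has them), but the general shape of that kind of fix is re-importing the Yarn repo's signing key, something like the following, which may or may not match what was posted:
# re-fetch the Yarn APT signing key, then refresh the package lists
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
sudo apt-get update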
-
RE: VMware VMC analog?
My particular setup may be a bit weird, but here's a better description. My primary pool of servers runs at home, and XOA runs there. I have a site-to-site VPN between home and the remote location, so the remote host is also managed by my XOA instance over the VPN tunnel. I don't need any proxies for this to work, because with the tunnel in place XOA and the remote host have direct IP connectivity.
On the remote host I have a VM that runs in an isolated network not reachable from home, so I cannot connect directly to the guest with VNC. When I'm on my home network I can reach this guest through XOA. If, however, I connect to the VPN service at the remote end (different from the site-to-site tunnel) to use the normal resources there and also want to use my test VM, reaching it becomes difficult, because from that connection I can't easily get back to my XOA instance at home.
I'm not sure a proxy would solve this. I followed the instructions to test XO Lite, and it looks like roughly what I'm looking for. When I log into my remote host I can see the dashboard and the list of guests, but I don't see a console for the guests. Is the console supposed to be there already, or is the plan to expose it in the future? If so, that would solve my problem.
-
VMware VMC analog?
Re: Is there something like VMRC (VMware Remote Console) for XCP-ng?
I found this old topic that sort of matches what I'm after, but I don't know if it's quite what I'm looking for.
I have several XCP-ng servers setup and they're being managed through XOA. All this is working properly. Nearly all of my VMs are just handling a workload of sorts that I either ssh into or connect to via the web just as you'd expect for any normal server.
I also have an edge case I need to solve for. In a couple of cases I have a VM running in an isolated network for testing purposes. The host is on a network I can route to, but the guest intentionally is not, and adding routing to allow direct connections is either not possible or would invalidate what I'm trying to test. Right now the only way I can manage these instances is through the XOA web console. That works, but with ESXi I could either do that or use a direct link to the VM through the VMRC client. It was a nice feature, because I could put the URI into a wiki and click it to launch the client and reach the VM.
Is there any way of replicating that functionality with XCP-ng through a VNC session? I'd assume my VNC client would have to point to the host where the VM is running on a certain port and the host would have to know to present the virtual console through to it.
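What I'm imagining is something along these lines, done from a box that can reach the host; the xenstore path and port handling are from memory, so treat them as a guess to verify:
# on the host (dom0) where the VM runs: find the domid and the localhost-only VNC port for its console
xe vm-list name-label=test-vm params=dom-id --minimal
xenstore-read /local/domain/<dom-id>/console/vnc-port
# from my workstation: tunnel that port over SSH and point a VNC client at the tunnel
ssh -L 5900:127.0.0.1:<vnc-port> root@xcp-host
vncviewer 127.0.0.1:5900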