    Migration fails: VDI_COPY_FAILED(End_of_file)

      bigdweeb

      I have a Xen Orchestra instance built from source and up to date with the latest in the source tree. I have an existing cluster of 3 Intel NUC8 machines running, each with local disks, and a cluster of 3 new Intel C3000 Atom-based boxes running that I am migrating to.

      I have migrated nearly all VMs from the old NUCs to the new Atom boxes through XO by powering down the VM, doing the migration, and powering it back up in the new pool. All of these VMs are nearly identical Debian 12 machines, built around the same time, the same way, from my PXE server.

      I have found two machines that I cannot migrate; both fail with the "VDI_COPY_FAILED(End_of_file) This is a XenServer/XCP-ng error" message when I try to migrate them. The VM I am concentrating on fails at 19-20% every time. After I noticed this pattern, I tried migrating it between two hosts in the old pool and it also failed at the same point. I also have a shared NFS SR set up, and migrating the VM to an existing pool member while moving its disk to that SR fails at the same point as well.

      This is the output I get when I download the log message.

      vm.migrate
      {
        "vm": "7a6434bc-8dce-40e0-879f-a739a072f99a",
        "mapVifsNetworks": {
          "6766e3df-853c-4858-39f3-2779a20981c0": "8479000e-03b9-602e-9e55-694b72385ad1"
        },
        "migrationNetwork": "540ad943-73ed-1886-9982-42148210c761",
        "sr": "46624e47-07d9-317f-983b-c416fedfb73f",
        "targetHost": "69473b68-bd12-4c92-a470-55ae4c0cb259"
      }
      {
        "code": "VDI_COPY_FAILED",
        "params": [
          "End_of_file"
        ],
        "task": {
          "uuid": "62cc8e05-74ad-f550-5df2-90f4c3dce3c1",
          "name_label": "Async.VM.migrate_send",
          "name_description": "",
          "allowed_operations": [],
          "current_operations": {},
          "created": "20250110T12:24:30Z",
          "finished": "20250110T12:25:55Z",
          "status": "failure",
          "resident_on": "OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9",
          "progress": 1,
          "type": "<none/>",
          "result": "",
          "error_info": [
            "VDI_COPY_FAILED",
            "End_of_file"
          ],
          "other_config": {},
          "subtask_of": "OpaqueRef:NULL",
          "subtasks": [],
          "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1564))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 131))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1228))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 2298))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
        },
        "message": "VDI_COPY_FAILED(End_of_file)",
        "name": "XapiError",
        "stack": "XapiError: VDI_COPY_FAILED(End_of_file)
          at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_XapiError.mjs:16:12)
          at default (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_getTaskResult.mjs:13:29)
          at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1068:24)
          at file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1102:14
          at Array.forEach (<anonymous>)
          at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1092:12)
          at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1265:14)"
      }
      
        bigdweeb @bigdweeb

        @bigdweeb All hosts are running 8.2.1 with all patches applied too.

          Danp Pro Support Team @bigdweeb

          @bigdweeb My guess is that the VDI chain contains some corruption. You may want to check SMlog to see if it contains additional details. You could also use vhd-util to check for corruption.
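
           A minimal sketch of what those checks could look like on the source host, assuming a file-based local SR mounted under /var/run/sr-mount (the SR and VDI UUIDs used here are the ones that appear in the logs later in this topic):

           # Watch the storage manager log while retrying the migration/copy
           tail -f /var/log/SMlog | grep 388eb055

           # Structural check of the VDI's VHD file (footer/header/BAT consistency)
           cd /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2
           vhd-util check -n 388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd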

            bigdweeb

            I'm not really sure what I'm looking for in that file, but this is the last of what I saw while tailing the log and grepping for the first part of the VDI's UUID:

            [13:23 marshall log]# tail -f SMlog | grep 388eb055
            Jan 10 13:26:42 marshall SM: [27086] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:43 marshall SM: [27139] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:43 marshall SM: [27139] vdi_snapshot {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|c333f7d1-6a7c-4ccd-96f6-1f927e55a8e5|VDI.snapshot', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:51ab012b-022f-4069-8121-c60ef2337dc5', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'driver_params': {'vhd-parent': '9405bfc7-ee0e-4b63-8c28-c464479f25b6', 'read-caching-enabled-on-18a404bb-dd5c-4795-944b-c10243d44cbb': 'false', 'host_OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9': 'RW', 'read-caching-reason-18a404bb-dd5c-4795-944b-c10243d44cbb': 'NO_RO_IMAGE', 'mirror': 'null'}, 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'}
            Jan 10 13:26:43 marshall SM: [27139] Pause request for 388eb055-95b9-45d4-a4b0-517163c9e4c3
            Jan 10 13:26:43 marshall SM: [27163] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:43 marshall SM: [27163] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:43 marshall SM: [27163] Pause for 388eb055-95b9-45d4-a4b0-517163c9e4c3
            Jan 10 13:26:43 marshall SM: [27163] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:43 marshall SM: [27139] FileVDI._snapshot for 388eb055-95b9-45d4-a4b0-517163c9e4c3 (type 2)
            Jan 10 13:26:43 marshall SM: [27139] ['/usr/bin/vhd-util', 'query', '--debug', '-d', '-n', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:43 marshall SM: [27139] FileVDI._link /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd to /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/85a71438-a176-4ced-a8b1-f80cdeb2e821.vhd
            Jan 10 13:26:43 marshall SM: [27139] ['/usr/sbin/td-util', 'snapshot', 'vhd', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd.new', '85a71438-a176-4ced-a8b1-f80cdeb2e821.vhd']
            Jan 10 13:26:43 marshall SM: [27139] FileVDI._rename /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd.new to /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd
            Jan 10 13:26:43 marshall SM: [27139] ['/usr/sbin/td-util', 'query', 'vhd', '-p', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:43 marshall SM: [27139] Unpause request for 388eb055-95b9-45d4-a4b0-517163c9e4c3 secondary=null
            Jan 10 13:26:43 marshall SM: [27192] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:43 marshall SM: [27192] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:43 marshall SM: [27192] Unpause for 388eb055-95b9-45d4-a4b0-517163c9e4c3
            Jan 10 13:26:43 marshall SM: [27192] Realpath: /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd
            Jan 10 13:26:43 marshall SM: [27192] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:43 marshall SM: [27192] ['/usr/sbin/tap-ctl', 'unpause', '-p', '25627', '-m', '3', '-2', 'null', '-a', 'vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:43 marshall SM: [27192] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:48 marshall SM: [27437] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:48 marshall SM: [27437] vdi_deactivate {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|93944e89-c639-4e21-bf1a-d8efbff43859|VDI.deactivate', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:d6e64cae-3365-448b-86fa-f4e0d6cc6ae1', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'}
            Jan 10 13:26:48 marshall SM: [27437] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:48 marshall SM: [27437] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:49 marshall SM: [27437] tap.deactivate: Shut down Tapdisk(vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd, pid=25627, minor=3, state=R)
            Jan 10 13:26:49 marshall SM: [27437] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:49 marshall SM: [27437] Removed host key host_OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9 for 388eb055-95b9-45d4-a4b0-517163c9e4c3
            Jan 10 13:26:49 marshall SM: [27437] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            Jan 10 13:26:49 marshall SM: [27485] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
            Jan 10 13:26:49 marshall SM: [27485] vdi_detach {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|383e83b2-b860-4f25-8de1-29d310e3e0c8|VDI.detach', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:cd5df917-b502-4b0e-975b-deac99e09852', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'}
            Jan 10 13:26:49 marshall SM: [27485] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
            
              bigdweeb

              I think I checked the disk image the right way, and it reports as valid:

              [13:30 marshall 36a86edc-f16e-abad-b8a4-6f7f0f60aad2]# vhd-util check -n 388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd 
              388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd is valid
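
              Note that vhd-util check only validates the single .vhd file it is given; the SMlog excerpt above shows a vhd-parent (9405bfc7-ee0e-4b63-8c28-c464479f25b6), so the rest of the chain may be worth checking the same way. A rough sketch (the query -p invocation to print the parent is an assumption about vhd-util's options):

              # Print the parent of the active VHD, then run the same check against it
              vhd-util query -p -n 388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd
              vhd-util check -n 9405bfc7-ee0e-4b63-8c28-c464479f25b6.vhd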
              
                bigdweeb

                Instead of migrating, I tried creating a copy of the VM while it was powered off. First I tried copying it to the remote host, but that failed around the same place. I then tried copying it to NFS shared storage, which also failed. I then tried copying it to a different disk on the same host where the VM lives, and that failed. Finally, I tried copying it to the same disk on the same host where it lives, and that too failed.

                [13:45 marshall log]# tail -f SMlog | grep 388eb055
                Jan 10 13:45:53 marshall SM: [13371] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
                Jan 10 13:45:53 marshall SM: [13371] vdi_attach {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|1fa9d756-b057-4484-8958-f5fcb297a5fd|VDI.attach2', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': ['false'], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:fd4f282e-269a-4ccc-bfcb-224ff79b8ab0', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'}
                Jan 10 13:45:53 marshall SM: [13371] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:45:53 marshall SM: [13371] result: {'params_nbd': 'nbd:unix:/run/blktap-control/nbd/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3', 'o_direct_reason': 'RO_WITH_NO_PARENT', 'params': '/dev/sm/backend/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3', 'o_direct': True, 'xenstore_data': {'scsi/0x12/0x80': 'AIAAEjM4OGViMDU1LTk1YjktNDUgIA==', 'scsi/0x12/0x83': 'AIMAMQIBAC1YRU5TUkMgIDM4OGViMDU1LTk1YjktNDVkNC1hNGIwLTUxNzE2M2M5ZTRjMyA=', 'vdi-uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'mem-pool': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2'}}
                Jan 10 13:45:53 marshall SM: [13396] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
                Jan 10 13:45:53 marshall SM: [13396] vdi_activate {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|8135bc8b-35b4-40cc-b6c8-292ced5ebdb9|VDI.activate', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': ['false'], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:ea013ff9-9743-48fe-907e-ae59015f1550', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_activate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'}
                Jan 10 13:45:53 marshall SM: [13396] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:45:53 marshall SM: [13396] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:45:53 marshall SM: [13396] Adding tag to: 388eb055-95b9-45d4-a4b0-517163c9e4c3
                Jan 10 13:45:53 marshall SM: [13396] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
                Jan 10 13:45:53 marshall SM: [13396] PhyLink(/dev/sm/phy/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3) -> /var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd
                Jan 10 13:45:53 marshall SM: [13396] ['/usr/sbin/tap-ctl', 'open', '-p', '13451', '-m', '3', '-a', 'vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd', '-R', '-t', '50']
                Jan 10 13:45:53 marshall SM: [13396] tap.activate: Launched Tapdisk(vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd, pid=13451, minor=3, state=R)
                Jan 10 13:45:53 marshall SM: [13396] DeviceNode(/dev/sm/backend/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3) -> /dev/xen/blktap-2/tapdev3
                Jan 10 13:45:53 marshall SM: [13396] NBDLink(/run/blktap-control/nbd/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3) -> /run/blktap-control/nbd13451.3
                Jan 10 13:45:53 marshall SM: [13396] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:46:05 marshall SM: [13743] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
                Jan 10 13:46:05 marshall SM: [13743] vdi_deactivate {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|5ab8f590-253d-4f98-9913-d9893a7ec3d9|VDI.deactivate', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:7120f7bf-5d9e-4229-920d-746753ce4247', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'}
                Jan 10 13:46:05 marshall SM: [13743] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:46:05 marshall SM: [13743] lock: acquired /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:46:05 marshall SM: [13743] tap.deactivate: Shut down Tapdisk(vhd:/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd, pid=13451, minor=3, state=R)
                Jan 10 13:46:05 marshall SM: [13743] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
                Jan 10 13:46:05 marshall SM: [13743] Removed host key host_OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9 for 388eb055-95b9-45d4-a4b0-517163c9e4c3
                Jan 10 13:46:05 marshall SM: [13743] lock: released /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:46:05 marshall SM: [13809] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/36a86edc-f16e-abad-b8a4-6f7f0f60aad2/388eb055-95b9-45d4-a4b0-517163c9e4c3.vhd']
                Jan 10 13:46:05 marshall SM: [13809] vdi_detach {'sr_uuid': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'subtask_of': 'DummyRef:|876fedb8-4245-4ff4-ad32-63a9de449a3e|VDI.detach', 'vdi_ref': 'OpaqueRef:a83f0b8e-d601-48d5-8f21-cb4987be7ca4', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '388eb055-95b9-45d4-a4b0-517163c9e4c3', 'host_ref': 'OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9', 'session_ref': 'OpaqueRef:b4212357-6e51-4843-b1ac-0af23381abb5', 'device_config': {'device': '/dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S5JYNS0N601288P-part3', 'SRmaster': 'true'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:74876ac2-8be1-46eb-a7ba-77503551720e', 'local_cache_sr': '36a86edc-f16e-abad-b8a4-6f7f0f60aad2', 'vdi_uuid': '388eb055-95b9-45d4-a4b0-517163c9e4c3'}
                Jan 10 13:46:05 marshall SM: [13809] lock: opening lock file /var/lock/sm/388eb055-95b9-45d4-a4b0-517163c9e4c3/vdi
                Jan 10 13:46:06 marshall SMGC: [13852]         388eb055(40.000G/13.491G)
                
                  bigdweeb

                  vm.copy
                  {
                    "vm": "7a6434bc-8dce-40e0-879f-a739a072f99a",
                    "sr": "36a86edc-f16e-abad-b8a4-6f7f0f60aad2",
                    "name": "DNS0_COPY"
                  }
                  {
                    "code": "VDI_COPY_FAILED",
                    "params": [
                      "End_of_file"
                    ],
                    "task": {
                      "uuid": "2a25d3bc-b820-0971-d59a-b415dda0dc45",
                      "name_label": "Async.VM.copy",
                      "name_description": "",
                      "allowed_operations": [],
                      "current_operations": {},
                      "created": "20250110T18:45:52Z",
                      "finished": "20250110T18:46:06Z",
                      "status": "failure",
                      "resident_on": "OpaqueRef:50fd6973-7a56-42e5-9a16-4a996f5facf9",
                      "progress": 1,
                      "type": "<none/>",
                      "result": "",
                      "error_info": [
                        "VDI_COPY_FAILED",
                        "End_of_file"
                      ],
                      "other_config": {},
                      "subtask_of": "OpaqueRef:NULL",
                      "subtasks": [
                        "OpaqueRef:e66f3549-776d-4b7c-ae41-98b495d6c046"
                      ],
                      "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 80))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 122))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 130))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 209))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 220))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 222))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 442))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/xapi_vm.ml)(line 858))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
                    },
                    "message": "VDI_COPY_FAILED(End_of_file)",
                    "name": "XapiError",
                    "stack": "XapiError: VDI_COPY_FAILED(End_of_file)
                      at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_XapiError.mjs:16:12)
                      at default (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/_getTaskResult.mjs:13:29)
                      at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1068:24)
                      at file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1102:14
                      at Array.forEach (<anonymous>)
                      at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/xen-api/index.mjs:1092:12)
                      at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202501091821/packages/
                  
                    bigdweeb @Danp

                    @Danp any clues from the logs I posted?

                      Danp Pro Support Team @bigdweeb

                      @bigdweeb said in Migration fails: VDI_COPY_FAILED(End_of_file):

                      "message": "VDI_COPY_FAILED(End_of_file)",

                      This is the only thing that I saw as problematic. If you have backups, then that is your best option to recover this VM. If you don't have backups, then the data must not be that important, right? 😉 J/K

                      You could try copying the data from inside the VM, using dd to transfer the data between VDIs.
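
                      A rough sketch of that dd approach, assuming a new blank disk of at least the same size has been created on the destination SR and attached to the VM as /dev/xvdb while the original disk is /dev/xvda (device names are assumptions; verify them first):

                      # Inside the VM, ideally booted from a rescue/live ISO so the source filesystem is not mounted read-write
                      lsblk    # confirm which device is the source and which is the new blank disk
                      dd if=/dev/xvda of=/dev/xvdb bs=4M status=progress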

                        bigdweeb @Danp

                        @Danp I got around this by using Clonezilla to copy the disks of the 2 VMs I was having an issue with. This is very strange: the VMs booted and ran fine on the old host, and I was able to clone them with Clonezilla to the new host, where they also boot and run fine. Something is preventing the native tools from migrating these specific disks, though.
