Failed import from ESXi 7
-
Hello,
I'll give a brief description before I get into the details. I am migrating away from ESXi and have migrated over 20 VMs with no issue. For the last 3 days I have been trying to migrate one of my biggest VMs, and every time it gets to the end it fails. I have tried different SRs, and I even installed a larger HDD to make sure it was not a space issue. I have spent days trying to work out whether I was running into limitations or whether there is a bug; I think this may be a bug. I have tried the import from XO from source and from XOA.
I know there is a 2TB limit on VDIs, but the largest VDI is 1.8TB. I am importing a VM with a 40GB, a 400GB, and a 1.8TB disk. The machine I am importing it to is the same machine it was running on before: I migrated to an old Dell R710 and then back to my current machine, and then I moved my remaining ESXi disks to a similar machine with the same architecture but less RAM to finish the migration. All other imports have been successful in all of these scenarios except this one.
I have found one post with a similar error, but it was from a long time ago and is not quite the same scenario. Every time, the import fails after 18 hours with "Error: already finalized or destroyed". I may have messed this process up myself: all of my spinning rust is passed through to a NAS with an HBA, so my biggest local drive is/was 2TB. During the import I moved the 2 small disks as they finished importing so the remaining disk would have room on the 2TB NVMe I was importing to. That attempt failed with "Error: already finalized or destroyed". Could this be a database error, possibly caused by my actions?
Relevant Details:
ESXi 7.0 Free Edition
XCP-ng 8.3, all patches installed
12-core CPU with 128GB RAM
VM is an Ubuntu OS with a 40GB, a 400GB, and a 1.8TB drive. It has 6 vCPUs and 24GB RAM. 2.23TB total provisioned storage. Nothing special beyond that.
SR to import to is a 3TB HDD with 2.69 TB free.
Here is the relevant log info from XOA. I can post more if needed.
Also, it would be a nice feature to be able to choose multiple SRs when a VM has multiple disks.
vm.importMultipleFromEsxi { "concurrency": 2, "host": "192.168.1.1", "network": "obfuscated ", "password": "* obfuscated *", "sr": "b588531a-0ea4-beba-fa6f-94ef1b6c16cf", "sslVerify": false, "stopOnError": true, "stopSource": false, "user": "root", "vms": [ "31" ] } { "succeeded": {}, "message": "already finalized or destroyed", "name": "Error", "stack": "Error: already finalized or destroyed at Pack.entry (/usr/local/lib/node_modules/xo-server/node_modules/tar-stream/pack.js:138:51) at Pack.resolver (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:5:6) at Promise._execute (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/debuggability.js:384:9) at Promise._resolveFromExecutor (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:518:18) at new Promise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:103:10) at Pack.fromCallback (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:9:10) at addEntry (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:9:22) at writeBlock (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:16:9) at addDisk (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:46:13) at importVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVm.mjs:22:5) at importVdi (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVdi.mjs:6:17) at file:///usr/local/lib/node_modules/xo-server/src/xo-mixins/migrate-vm.mjs:260:21 at Task.runInside (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:158:22) at Task.run (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:141:20)" }
{ "id": "glz2e29kyn5", "properties": { "name": "creating MV on XCP side" }, "start": 1719114046873, "status": "success", "end": 1719114047010, "result": { "uuid": "d7d1921f-8ea1-2e89-3076-9e8538b4644b", "allowed_operations": [ "create_vtpm", "changing_NVRAM", "changing_dynamic_range", "changing_shadow_memory", "changing_static_range", "make_into_template", "migrate_send", "destroy", "export", "start_on", "start", "clone", "copy", "snapshot" ], "current_operations": {}, "name_label": "TMP", "name_description": "from esxi", "power_state": "Halted", "user_version": 1, "is_a_template": false, "is_default_template": false, "suspend_VDI": "OpaqueRef:NULL", "resident_on": "OpaqueRef:NULL", "scheduled_to_be_resident_on": "OpaqueRef:NULL", "affinity": "OpaqueRef:NULL", "memory_overhead": 209715200, "memory_target": 0, "memory_static_max": 25769803776, "memory_dynamic_max": 25769803776, "memory_dynamic_min": 25769803776, "memory_static_min": 25769803776, "VCPUs_params": {}, "VCPUs_max": 6, "VCPUs_at_startup": 6, "actions_after_softreboot": "soft_reboot", "actions_after_shutdown": "destroy", "actions_after_reboot": "restart", "actions_after_crash": "restart", "consoles": [], "VIFs": [], "VBDs": [], "VUSBs": [], "crash_dumps": [], "VTPMs": [], "PV_bootloader": "", "PV_kernel": "", "PV_ramdisk": "", "PV_args": "", "PV_bootloader_args": "", "PV_legacy_args": "", "HVM_boot_policy": "BIOS order", "HVM_boot_params": { "order": "cdn" }, "HVM_shadow_multiplier": 1, "platform": { "timeoffset": "0", "nx": "true", "acpi": "1", "apic": "true", "pae": "true", "hpet": "true", "viridian": "true" }, "PCI_bus": "", "other_config": { "mac_seed": "13d4db27-21db-27c6-e7aa-d6ca9fd8d8bc", "vgpu_pci": "", "base_template_name": "Other install media", "install-methods": "cdrom" }, "domid": -1, "domarch": "", "last_boot_CPU_flags": {}, "is_control_domain": false, "metrics": "OpaqueRef:c7b0462c-7756-874c-6cc0-ff0492327058", "guest_metrics": "OpaqueRef:NULL", "last_booted_record": "", "recommendations": "<restrictions><restriction field=\"memory-static-max\" max=\"137438953472\" /><restriction field=\"vcpus-max\" max=\"32\" /><restriction property=\"number-of-vbds\" max=\"255\" /><restriction property=\"number-of-vifs\" max=\"7\" /><restriction field=\"has-vendor-device\" value=\"false\" /></restrictions>", "xenstore_data": {}, "ha_always_run": false, "ha_restart_priority": "", "is_a_snapshot": false, "snapshot_of": "OpaqueRef:NULL", "snapshots": [], "snapshot_time": "19700101T00:00:00Z", "transportable_snapshot_id": "", "blobs": {}, "tags": [], "blocked_operations": {}, "snapshot_info": {}, "snapshot_metadata": "", "parent": "OpaqueRef:NULL", "children": [], "bios_strings": {}, "protection_policy": "OpaqueRef:NULL", "is_snapshot_from_vmpp": false, "snapshot_schedule": "OpaqueRef:NULL", "is_vmss_snapshot": false, "appliance": "OpaqueRef:NULL", "start_delay": 0, "shutdown_delay": 0, "order": 0, "VGPUs": [], "attached_PCIs": [], "suspend_SR": "OpaqueRef:NULL", "version": 0, "generation_id": "0:0", "hardware_platform_version": 0, "has_vendor_device": false, "requires_reboot": false, "reference_label": "", "domain_type": "hvm", "NVRAM": {}, "pending_guidances": [], "pending_guidances_recommended": [], "pending_guidances_full": [] } }, { "id": "62izzqtkya8", "properties": { "name": "Cold import of disks scsi0:0" }, "start": 1719114047011, "status": "success", "end": 1719117922575, "result": { "vdi": { "uuid": "357545bc-ce1a-4ba6-9d7a-1ef56c1ffcc5", "name_label": "[ESXI]TMP-flat.vmdk", "name_description": "fromESXI from esxi", 
"allowed_operations": [ "generate_config", "update", "forget", "destroy", "snapshot", "resize", "copy", "clone" ], "current_operations": {}, "SR": "OpaqueRef:40407e0d-38b8-7700-66da-409354ac529c", "VBDs": [], "crash_dumps": [], "virtual_size": 42949672960, "physical_utilisation": 40222175232, "type": "user", "sharable": false, "read_only": false, "other_config": { "content_id": "1cac051a-c82a-794e-de58-bc9bbf8447bc" }, "storage_lock": false, "location": "357545bc-ce1a-4ba6-9d7a-1ef56c1ffcc5", "managed": true, "missing": false, "parent": "OpaqueRef:NULL", "xenstore_data": {}, "sm_config": {}, "is_a_snapshot": false, "snapshot_of": "OpaqueRef:NULL", "snapshots": [], "snapshot_time": "19700101T00:00:00Z", "tags": [], "allow_caching": false, "on_boot": "persist", "metadata_of_pool": "", "metadata_latest": false, "is_tools_iso": false, "cbt_enabled": false }, "vhd": { "ref": "Ref:002", "label": "TMP-flat.vmdk" } } }, { "id": "lbvwvbm7ybm", "properties": { "name": "Cold import of disks scsi0:3" }, "start": 1719114047013, "status": "success", "end": 1719138095863, "result": { "vdi": { "uuid": "92faddcd-e9c4-4405-b3d7-f53482ae0896", "name_label": "[ESXI]TMP_3-flat.vmdk", "name_description": "fromESXI from esxi", "allowed_operations": [ "generate_config", "update", "forget", "destroy", "snapshot", "resize", "copy", "clone" ], "current_operations": {}, "SR": "OpaqueRef:40407e0d-38b8-7700-66da-409354ac529c", "VBDs": [], "crash_dumps": [], "virtual_size": 429496729600, "physical_utilisation": 317442682880, "type": "user", "sharable": false, "read_only": false, "other_config": { "content_id": "9163a597-98ae-f6f3-ad38-d9f0d7144141" }, "storage_lock": false, "location": "92faddcd-e9c4-4405-b3d7-f53482ae0896", "managed": true, "missing": false, "parent": "OpaqueRef:NULL", "xenstore_data": {}, "sm_config": {}, "is_a_snapshot": false, "snapshot_of": "OpaqueRef:NULL", "snapshots": [], "snapshot_time": "19700101T00:00:00Z", "tags": [], "allow_caching": false, "on_boot": "persist", "metadata_of_pool": "", "metadata_latest": false, "is_tools_iso": false, "cbt_enabled": false }, "vhd": { "ref": "Ref:002", "label": "TMP_3-flat.vmdk" } } }, { "id": "3zwzw4wfpzx", "properties": { "name": "Cold import of disks scsi0:4" }, "start": 1719114047014, "status": "failure", "end": 1719177071580, "result": { "message": "already finalized or destroyed", "name": "Error", "stack": "Error: already finalized or destroyed\n at Pack.entry (/usr/local/lib/node_modules/xo-server/node_modules/tar-stream/pack.js:138:51)\n at Pack.resolver (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:5:6)\n at Promise._execute (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/debuggability.js:384:9)\n at Promise._resolveFromExecutor (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:518:18)\n at new Promise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:103:10)\n at Pack.fromCallback (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:9:10)\n at addEntry (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:9:22)\n at writeBlock (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:16:9)\n at addDisk (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:46:13)\n at importVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVm.mjs:22:5)\n at 
importVdi (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVdi.mjs:6:17)\n at file:///usr/local/lib/node_modules/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n at Task.runInside (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:158:22)\n at Task.run (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:141:20)" } } ], "end": 1719177072500, "result": { "message": "already finalized or destroyed", "name": "Error", "stack": "Error: already finalized or destroyed\n at Pack.entry (/usr/local/lib/node_modules/xo-server/node_modules/tar-stream/pack.js:138:51)\n at Pack.resolver (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:5:6)\n at Promise._execute (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/debuggability.js:384:9)\n at Promise._resolveFromExecutor (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:518:18)\n at new Promise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:103:10)\n at Pack.fromCallback (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:9:10)\n at addEntry (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:9:22)\n at writeBlock (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:16:9)\n at addDisk (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:46:13)\n at importVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVm.mjs:22:5)\n at importVdi (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVdi.mjs:6:17)\n at file:///usr/local/lib/node_modules/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n at Task.runInside (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:158:22)\n at Task.run (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:141:20)" } } ], "end": 1719177072501, "result": { "succeeded": {}, "message": "already finalized or destroyed", "name": "Error", "stack": "Error: already finalized or destroyed\n at Pack.entry (/usr/local/lib/node_modules/xo-server/node_modules/tar-stream/pack.js:138:51)\n at Pack.resolver (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:5:6)\n at Promise._execute (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/debuggability.js:384:9)\n at Promise._resolveFromExecutor (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:518:18)\n at new Promise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:103:10)\n at Pack.fromCallback (/usr/local/lib/node_modules/xo-server/node_modules/promise-toolbox/fromCallback.js:9:10)\n at addEntry (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:9:22)\n at writeBlock (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:16:9)\n at addDisk (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/_writeDisk.mjs:46:13)\n at importVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVm.mjs:22:5)\n at importVdi (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xva/importVdi.mjs:6:17)\n at file:///usr/local/lib/node_modules/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n at 
Task.runInside (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:158:22)\n at Task.run (/usr/local/lib/node_modules/xo-server/node_modules/@vates/task/index.js:141:20)" } }
-
@BGDev Has the space (or some portion of it) been consumed for this 1.8TB disk?
-
Hello Dustin,
Thank you for helping troubleshoot the issue.
If you mean how much of the disk is used: 1.6TB of the 1.8TB is used, and 226GB are free on the virtual disk. The disk is thick provisioned on ESXi, if that makes a difference. The 3TB HDD is ext and thin provisioned. The 400GB drive is also thick provisioned and imports fine, so I don't think that is the issue.
-
@BGDev What I'm trying to determine is whether there is already an amount of space consumed on your XCP-ng pool's SR that would correlate with the disk in question.
You mentioned a 3TB disk (and above you mentioned the 2TB limit); was this a typo?
-
As far as I am aware, there is a limit on the size of a guest VDI, so VMs can have a max disk size of 2TB. The 3TB HDD is completely empty, with 2.7TB free after formatting. So in whole, the VM has a total of 2.23TB across 3 disks and is being imported to a 3TB disk/SR with 2.69TB free.
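For reference, the SR's capacity can be double-checked from dom0 with something like this (using the SR UUID from my import command; both values are reported in bytes):

```
# Free space = physical-size minus physical-utilisation
xe sr-list uuid=b588531a-0ea4-beba-fa6f-94ef1b6c16cf \
  params=name-label,physical-size,physical-utilisation,virtual-allocation
```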
I hope this clarifies this for you.
-
@BGDev Okay, so you have 3 disks that you're importing from VMware, and the largest is 1.8TB (thick provisioned).
In the logs above there is the error message "Error: already finalized or destroyed". Does this 1.8TB disk already exist on your XCP-ng pool?
-
No, when the import completes, all 3 disks are deleted from the SR. The SR is empty.
Edit: the 2 smaller drives import and can be interacted with because they finish much sooner, but when the last one finishes, they all disappear and the SR is empty.
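For what it's worth, one way to confirm nothing is left behind on the SR after the failed import (same SR UUID as above):

```
# Rescan the SR, then list any VDIs it still contains; an empty list means the SR really is empty
xe sr-scan uuid=b588531a-0ea4-beba-fa6f-94ef1b6c16cf
xe vdi-list sr-uuid=b588531a-0ea4-beba-fa6f-94ef1b6c16cf params=uuid,name-label,virtual-size
```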
-
Here is the sr-list output of the 3TB SR.
uuid ( RO) : b588531a-0ea4-beba-fa6f-94ef1b6c16cf
name-label ( RW): HDD_3T
name-description ( RW): Old 3TB HDD
host ( RO): xcp-ng-main
allowed-operations (SRO): VDI.enable_cbt; VDI.list_changed_blocks; unplug; plug; PBD.create; VDI.disable_cbt; update; PBD.destroy; VDI.resize; VDI.clone; VDI.data_destroy; scan; VDI.snapshot; VDI.mirror; VDI.create; VDI.destroy; VDI.set_on_boot
current-operations (SRO):
VDIs (SRO):
PBDs (SRO): d340fec0-3c1c-d808-906e-856674c3da46
virtual-allocation ( RO): 0
physical-utilisation ( RO): 2125824
physical-size ( RO): 2952313094144
type ( RO): ext
content-type ( RO): user
shared ( RW): false
introduced-by ( RO): <not in database>
is-tools-sr ( RO): false
other-config (MRW): auto-scan: true
sm-config (MRO): devserial: scsi-350014ee20ab0a29a
blobs ( RO):
local-cache-enabled ( RO): false
tags (SRW):
clustered ( RO): false
-
@BGDev As a thought, could you export the 1.8TB drive and manually import it into your environment using Xen Orchestra (rather than attempting to export the VM as a whole)?
Once imported, you would simply attach it to the VM.
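Roughly, something like this from the command line, if you prefer the CLI over the XO disk-import UI. This is only a sketch: the paths, the name-label, the 1800GiB size and the <vm-uuid> are placeholders, and it assumes you have somewhere with enough space for the converted file plus qemu-img available.

```
# 1. Convert the ESXi disk to VHD (qemu-img calls the VHD format "vpc");
#    point it at the descriptor .vmdk, not the -flat file
qemu-img convert -p -f vmdk -O vpc /path/to/bigdisk.vmdk /path/to/bigdisk.vhd

# 2. On the XCP-ng host: create an empty VDI of the same virtual size, then import into it
VDI=$(xe vdi-create sr-uuid=b588531a-0ea4-beba-fa6f-94ef1b6c16cf \
        name-label="TMP-1.8TB" virtual-size=1800GiB type=user)
xe vdi-import uuid="$VDI" filename=/path/to/bigdisk.vhd format=vhd --progress

# 3. Attach the imported disk to the VM
VBD=$(xe vbd-create vm-uuid=<vm-uuid> vdi-uuid="$VDI" device=autodetect mode=RW type=Disk)
xe vbd-plug uuid="$VBD"   # the plug takes effect immediately only if the VM is running
```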
-
Thank you for the suggestion.
If I am going to manually import the data, I will just remove the 1.8TB virtual disk in ESXi and import the VM without it. Then I'll take a 10Gb NIC out of one of my servers and transfer the files to a fresh new disk. My issue is that I don't have any drives big enough to easily hold that disk for the export. I have a few options for getting the data transferred, but if this is a bug I would like to help get it addressed.
Again, thank you for the help troubleshooting the issue and for your suggested workarounds.
-
@BGDev Sorry, just had a thought. Is this 1.8TB drive for a file share, etc.?
If so, why not create a new drive on XCP-ng and simply use a copy operation to move the individual files and permissions over?
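A rough sketch of that approach (the device names, mount point, size and <new-vm-uuid> are just examples, and it assumes the VM itself has already been imported without the big disk):

```
# On the XCP-ng host: create a fresh disk on the 3TB SR and attach it to the new VM
VDI=$(xe vdi-create sr-uuid=b588531a-0ea4-beba-fa6f-94ef1b6c16cf \
        name-label="share-data" virtual-size=1800GiB type=user)
VBD=$(xe vbd-create vm-uuid=<new-vm-uuid> vdi-uuid="$VDI" device=autodetect mode=RW type=Disk)
xe vbd-plug uuid="$VBD"

# Inside the new Ubuntu VM: format and mount it (the device name may differ, e.g. /dev/xvdb)
mkfs.ext4 /dev/xvdb
mkdir -p /mnt/share && mount /dev/xvdb /mnt/share

# Pull the files over from the old VM, preserving ownership, permissions, ACLs and xattrs
rsync -aAXH --info=progress2 root@old-vm:/srv/share/ /mnt/share/
```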
-
That is my plan if I don't find an XCP-ng based solution: I will just scp the data over to a new drive. I have lost so much time that I am letting it run as-is for now, and when I have a bit of time I will settle on the best path forward. I am still learning XCP-ng and would like to learn more, as well as possibly help iron out a bug if this is one. If this is a bug, it is best to leave everything intact as it is so that I can troubleshoot it further.
Thank you for the suggestions.
-
@BGDev I'm not certain whether you're encountering a bug, a network issue, or something else.
Given that you have a working production workload on ESXi and a production XCP-ng environment, what I would personally suggest is to replicate the data from the old to the new, schedule a final cutover day, and perform a final replication.
If this were a database drive or something that held system files, then I'd dig further into why it's not exporting successfully, but since it seems like it's just a file share, replicating the individual files would be the simplest approach.
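The replication itself could be as simple as a two-pass rsync (hostnames and paths below are placeholders; it assumes SSH access from the new VM to the old one):

```
# Initial pass: copy everything while the old share is still live (re-run as often as you like)
rsync -aAXH --info=progress2 root@old-vm:/srv/share/ /srv/share/

# Cutover day: stop whatever serves the share on the old VM, then do a final delta sync;
# --delete makes the destination an exact mirror, removing anything deleted since the first pass
rsync -aAXH --delete root@old-vm:/srv/share/ /srv/share/
```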