Import from VMware fails after upgrade to XOA 5.91
-
FYI, jumping between commits is simply a:
git checkout <target commit>
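For example (the commit hash here is purely illustrative), you can test a given commit and then come back to the fix branch:
git checkout 1a2b3c4d              # detach HEAD at the commit under test
# ...run your import test...
git checkout fix_xva_import_thin   # return to the branch tip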
-
@acomav said in Import from VMware fails after upgrade to XOA 5.91:
redid the job with a snapshot from a running VM to a local SR. Same issue occurred at the same time.
I too did the same thing; it didn't work for me either.
Here is what did work:
- Ran quick deploy (https://xen-orchestra.com/#!/xoa) - it installs an older version (I don't remember which).
- I had it do one upgrade which takes it to XOA 5.90.
- Once it did that, it let me do the import from esxi.
-
5.90 is the current "stable" release channel, without the speed improvement (and the bug). We'll have a patch release for 5.91 (the current "latest") that will solve it.
-
@olivierlambert
Thanks! -
@khicks: that is great news!
@archw @jasonmap @rmaclachlan I pushed a new commit to the branch fix_xva_import_thin, aligning the last block to exactly 1 MB. Could you test if the imports are working now?
For those who have an XOA and want to help, please open a tunnel and send me the tunnel by chat (not directly in this topic), and I will patch your appliance.
For those who use XO from the sources, you'll need to change branches; a sketch is below.
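A minimal sketch of what that branch switch can look like on an XO-from-source install (the branch name is the one above; the fetch and rebuild steps assume a standard source setup, so adjust them to however you build and run xo-server):
git fetch origin                   # get the latest branches from the repository
git checkout fix_xva_import_thin   # switch to the fix branch
yarn && yarn build                 # reinstall dependencies and rebuild
# then restart xo-server with whatever your setup uses (systemd unit, forever, pm2...)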
-
Over the weekend I spun up an XO instance from source. This morning I changed to 'fix_xva_import_thin' after your post. Unfortunately, still the same failure for me. The only notable difference I see is that your ${str} addition for the log now comes back as "undefined".
Here are my logs:
From XO:
vm.importMultipleFromEsxi
{
  "concurrency": 2,
  "host": "vsphere.nest.local",
  "network": "7f7d2fcc-c78b-b1c9-101a-0ca9570e3462",
  "password": "* obfuscated *",
  "sr": "50d8f945-8ae4-dd87-0149-e6054a10d51f",
  "sslVerify": false,
  "stopOnError": true,
  "stopSource": true,
  "user": "administrator@vsphere.local",
  "vms": [ "vm-2427" ]
}
{
  "succeeded": {},
  "message": "no opaque ref found in undefined",
  "name": "Error",
  "stack": "Error: no opaque ref found in undefined
    at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/importVm.mjs:28:19)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/importVdi.mjs:6:17)
    at file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21
    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402050455/@vates/task/index.js:158:22)
    at Task.run (/opt/xo/xo-builds/xen-orchestra-202402050455/@vates/task/index.js:141:20)"
}
and from journalctl:
Feb 05 05:12:04 xoa-fs xo-server[32410]: 2024-02-05T10:12:04.864Z xo:xo-server WARN possibly unhandled rejection {
Feb 05 05:12:04 xoa-fs xo-server[32410]:   error: Error: already finalized or destroyed
Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/tar-stream/pack.js:138:51)
Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/promise-toolbox/fromCallback.js:5:6)
Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/debuggability.js:384:9)
Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/promise.js:518:18)
Feb 05 05:12:04 xoa-fs xo-server[32410]:       at new Promise (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/promise.js:103:10)
Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/promise-toolbox/fromCallback.js:9:10)
Feb 05 05:12:04 xoa-fs xo-server[32410]:       at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/_writeDisk.mjs:15:22)
Feb 05 05:12:04 xoa-fs xo-server[32410]: }
Feb 05 05:12:06 xoa-fs xo-server[32410]: root@10.96.22.111 Xapi#putResource /import/ XapiError: IMPORT_ERROR(INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ])
Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/_XapiError.mjs:16:12)
Feb 05 05:12:06 xoa-fs xo-server[32410]:     at default (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/_getTaskResult.mjs:11:29)
Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1006:24)
Feb 05 05:12:06 xoa-fs xo-server[32410]:     at file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1040:14
Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Array.forEach (<anonymous>)
Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1030:12)
Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1203:14) {
Feb 05 05:12:06 xoa-fs xo-server[32410]:   code: 'IMPORT_ERROR',
Feb 05 05:12:06 xoa-fs xo-server[32410]:   params: [ 'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]' ],
Feb 05 05:12:06 xoa-fs xo-server[32410]:   call: undefined,
Feb 05 05:12:06 xoa-fs xo-server[32410]:   url: undefined,
Feb 05 05:12:06 xoa-fs xo-server[32410]:   task: task {
Feb 05 05:12:06 xoa-fs xo-server[32410]:     uuid: '0f812914-46c0-fe29-d563-1af7bca72d96',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     name_label: '[XO] VM import',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     name_description: '',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     allowed_operations: [],
Feb 05 05:12:06 xoa-fs xo-server[32410]:     current_operations: {},
Feb 05 05:12:06 xoa-fs xo-server[32410]:     created: '20240205T10:07:04Z',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     finished: '20240205T10:12:06Z',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     status: 'failure',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     resident_on: 'OpaqueRef:85a049dc-296e-4ef0-bdbc-82e2845ecd68',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     progress: 1,
Feb 05 05:12:06 xoa-fs xo-server[32410]:     type: '<none/>',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     result: '',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     error_info: [
Feb 05 05:12:06 xoa-fs xo-server[32410]:       'IMPORT_ERROR',
Feb 05 05:12:06 xoa-fs xo-server[32410]:       'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]'
Feb 05 05:12:06 xoa-fs xo-server[32410]:     ],
Feb 05 05:12:06 xoa-fs xo-server[32410]:     other_config: { object_creation: 'complete' },
Feb 05 05:12:06 xoa-fs xo-server[32410]:     subtask_of: 'OpaqueRef:NULL',
Feb 05 05:12:06 xoa-fs xo-server[32410]:     subtasks: [],
Feb 05 05:12:06 xoa-fs xo-server[32410]:     backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/import.ml)(line 2021))((process xapi)(filename ocaml/xapi/server_>
Feb 05 05:12:06 xoa-fs xo-server[32410]:   }
Feb 05 05:12:06 xoa-fs xo-server[32410]: }
Feb 05 05:12:06 xoa-fs xo-server[32410]: 2024-02-05T10:12:06.930Z xo:api WARN admin@admin.net | vm.importMultipleFromEsxi(...) [5m] =!> Error: no opaque ref found in undefined
-
I patched my XO source VM with the latest from 5th Feb and still had the same error.
"stack": "Error: no opaque ref found in undefinedIt may be I am not patching correctly so I have added a XOA trial and moved to the 'latest' channel and have ping @florent with a support tunnel to test in the morning.
-
@acomav you're up to date on your XOA
I pushed a new commit fixing an async condition on the fix_xva_import_thin branch. Feel free to test on your XO from source.
-
Thank you @florent for all your help! We got the VM to import now, I will try the other failed VM outside business hours but I expect it will work now as well!
-
@rmaclachlan said in Import from VMware fails after upgrade to XOA 5.91:
Thank you @florent for all your help! We got the VM to import now, I will try the other failed VM outside business hours but I expect it will work now as well!
thank you for the help
-
@florent Nice! This latest change allowed my migration to complete successfully. Seems like the peak transfer speed was about 70Mbps. 4.77GB in 5 minutes. I'm guessing the thin/zeros made this so fast?
-
@jasonmap said in Import from VMware fails after upgrade to XOA 5.91:
@florent Nice! This latest change allowed my migration to complete successfully. Seems like the peak transfer speed was about 70Mbps. 4.77GB in 5 minutes. I'm guessing the thin/zeros made this so fast?
yay
The thin import makes it fast (especially since it only needs one pass instead of two with the previous API), XCP-ng is a little faster at loading the XVA, and there is some magic. No secret though, everything is done in public: we invested a lot of time and energy to make it work fast, and we have more in the pipeline, both to make it work in more cases (vSAN, I am looking at you) and to make the content of running VMs easier to access.
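As a rough, generic illustration of the "thin" part (nothing XO-specific; the file name is just an example): a mostly empty disk image allocates far less than its virtual size, and those unallocated/zero blocks are exactly what a thin-aware import can skip instead of transferring.
truncate -s 10G demo.img   # create a 10 GiB sparse file (virtual size)
ls -lh demo.img            # apparent size: 10G
du -h demo.img             # actual allocation: ~0 -- only this part would need to cross the wire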
-
Just a quick update - I imported a handful of VMs today and was even able to move over the VM that failed on the weekend so I think that patch works @florent
-
@florent
Thanks. I have kicked off an import, but it takes 2 hours. However, the first small virtual disk has now imported successfully where it was failing before, so I am confident the rest will work. Will update then. Thanks.
-
Hi, a question about these patches and thin provisioning.
My test import now works, however, it fully provisioned the full size of the disk on an NFS SR.
[root@XXXX ~]# ls -salh /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/
total 540G
4.0K drwxr-xr-x  2 root root 4.0K Feb  6 09:33 .
4.0K drwxr-xr-x 27 root root 4.0K Feb  1 21:22 ..
151G -rw-r--r--  1 root root 151G Feb  6 10:45 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
 13G -rw-r--r--  1 root root  13G Feb  6 09:43 1eae9130-e6eb-45be-ae25-a7dcb7ee8f4e.vhd
171G -rw-r--r--  1 root root 171G Feb  6 10:51 751b7a5f-df32-4cb1-9479-e196671e7149.vhd
The two large disks are in an LVM VG on the source and combined, use up 253 GB of the 320 GB LV. They are thin provisioned on the VMware side.
Am I wrong to expect the vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the xcp-ng side?
Not a big deal, I am just curious.
Thanks
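(Side note, in case it helps anyone checking the same thing: comparing a file's apparent size with its allocated size shows whether it is sparse. The file name below is just one of the VHDs from the listing above; in that listing the first column from ls -s is already the allocation, and it matches the file size, which is why these look fully provisioned.)
du -h --apparent-size 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd   # logical size of the VHD file
du -h 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd                   # blocks actually allocated on the NFS SR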
-
thank you all, now time to do a patch release
@acomav said in Import from VMware fails after upgrade to XOA 5.91:
Hi, a question about these patches and thin provisioning.
My test import now works, however, it fully provisioned the full size of the disk on an NFS SR.
[root@XXXX ~]# ls -salh /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/
total 540G
4.0K drwxr-xr-x  2 root root 4.0K Feb  6 09:33 .
4.0K drwxr-xr-x 27 root root 4.0K Feb  1 21:22 ..
151G -rw-r--r--  1 root root 151G Feb  6 10:45 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
 13G -rw-r--r--  1 root root  13G Feb  6 09:43 1eae9130-e6eb-45be-ae25-a7dcb7ee8f4e.vhd
171G -rw-r--r--  1 root root 171G Feb  6 10:51 751b7a5f-df32-4cb1-9479-e196671e7149.vhd
The two large disks are in an LVM VG on the source and combined, use up 253 GB of the 320 GB LV. They are thin provisioned on the VMware side.
Am I wrong to expect the vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the xcp-ng side?
Not a big deal, I am just curious.
Thanks
LVM is thick provisioned on the XCP-ng side: https://xcp-ng.org/docs/storage.html#storage-types
-
@acomav How big are your original VM disks? (e.g. the total disk size on the VMware side)
-
@florent The VM is on an NFS SR which is thin provisioned. LVM is inside the VM on the virtual disks.
-
@olivierlambert Hi.
The disk sizes (and VMDK file sizes) are 150 GB and 170 GB. Both are in a volume group with one logical volume using 100% of the volume group, mounted using XFS. Disk space in use is 81%:
# pvs
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda2  centos     lvm2 a--   <15.51g     0
  /dev/sdb   VolGroup01 lvm2 a--  <150.00g     0
  /dev/sdc   VolGroup01 lvm2 a--  <170.00g     0
# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  VolGroup01   2   1   0 wz--n- 319.99g    0
  centos       1   2   0 wz--n- <15.51g    0
# lvs
  LV        VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  IMAPSpool VolGroup01 -wi-ao---- 319.99g
# df -h
  /dev/mapper/VolGroup01-IMAPSpool  320G  257G   64G  81% /var/spool/imap
The VMDK files live on an HPE Nimble CS3000 (block iSCSI). I am now thinking I will need to get into the VM and free up discarded/deleted blocks, which would make the VMDK sizes smaller (as they are set to thin provisioned on VMFS).
I'll do that, retry, and report back if I still see the full disk being written out to XCP-ng.
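For reference, a minimal sketch of the in-guest cleanup I mean (generic Linux commands, nothing XO-specific; the mount point is the one from the df output above, and the zero-fill fallback temporarily fills the filesystem, so only use it when that is acceptable):
fstrim -v /var/spool/imap                                           # preferred: discard unused blocks, if supported end-to-end
# fallback: overwrite free space with zeros, then remove the filler file
dd if=/dev/zero of=/var/spool/imap/zerofill bs=1M status=progress   # stops with "No space left on device"; that is expected
rm -f /var/spool/imap/zerofill && sync
Either way the freed space should end up as zero blocks, which is what a thin import can skip; whether the VMDK itself shrinks still depends on VMware-side space reclamation.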