XCP-ng

    Import from VMware fails after upgrade to XOA 5.91

    Xen Orchestra
    64 Posts · 8 Posters · 14.2k Views
    • florent Vates 🪐 XO Team @archw

      @khicks: that is great news!

      @archw @jasonmap @rmaclachlan I pushed a new commit to the fix_xva_import_thin branch, aligning the last block to exactly 1 MB. Could you test whether the imports are working now?
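      (For illustration only: this is not the actual commit, and the chunk file name below is hypothetical. Aligning the last block to exactly 1 MB essentially means rounding a final partial chunk up to the next 1 MiB boundary with zero padding.)

      # Hypothetical sketch: zero-pad a final partial chunk so its size is an
      # exact multiple of 1 MiB before it goes into the XVA tar stream.
      size=$(stat -c%s last_chunk.bin)                    # current size in bytes
      aligned=$(( (size + 1048575) / 1048576 * 1048576 )) # round up to a 1 MiB multiple
      truncate -s "$aligned" last_chunk.bin               # extend the file with zeroes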

      For those who have an XOA and want to help, please open a tunnel and send me the tunnel by chat (not directly in this topic), and I will patch your appliance.

      For those who use XO from the sources, you'll need to switch to that branch.
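      (A rough outline for an XO-from-the-sources setup, assuming a standard yarn-built install; directory names and the way you restart xo-server depend on your environment.)

      cd xen-orchestra                 # your checkout of the xen-orchestra repository
      git fetch origin
      git checkout fix_xva_import_thin
      yarn && yarn build               # reinstall dependencies and rebuild
      # then restart xo-server however you normally run it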

      • jasonmap @florent

        @florent

        Over the weekend I spun up an XO instance from source. This morning I switched to 'fix_xva_import_thin' after your post. Unfortunately, it is still the same failure for me. The only notable difference I see is that your ${str} addition to the log now comes back as "undefined".

        Here are my logs:

        From XO:

        vm.importMultipleFromEsxi
        {
          "concurrency": 2,
          "host": "vsphere.nest.local",
          "network": "7f7d2fcc-c78b-b1c9-101a-0ca9570e3462",
          "password": "* obfuscated *",
          "sr": "50d8f945-8ae4-dd87-0149-e6054a10d51f",
          "sslVerify": false,
          "stopOnError": true,
          "stopSource": true,
          "user": "administrator@vsphere.local",
          "vms": [
            "vm-2427"
          ]
        }
        {
          "succeeded": {},
          "message": "no opaque ref found in  undefined",
          "name": "Error",
          "stack": "Error: no opaque ref found in  undefined
            at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/importVm.mjs:28:19)
            at processTicksAndRejections (node:internal/process/task_queues:95:5)
            at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/importVdi.mjs:6:17)
            at file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21
            at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402050455/@vates/task/index.js:158:22)
            at Task.run (/opt/xo/xo-builds/xen-orchestra-202402050455/@vates/task/index.js:141:20)"
        }
        

        and from journalctl:

        Feb 05 05:12:04 xoa-fs xo-server[32410]: 2024-02-05T10:12:04.864Z xo:xo-server WARN possibly unhandled rejection {
        Feb 05 05:12:04 xoa-fs xo-server[32410]:   error: Error: already finalized or destroyed
        Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/tar-stream/pack.js:138:51)
        Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/promise-toolbox/fromCallback.js:5:6)
        Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/debuggability.js:384:9)
        Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/promise.js:518:18)
        Feb 05 05:12:04 xoa-fs xo-server[32410]:       at new Promise (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/promise.js:103:10)
        Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/promise-toolbox/fromCallback.js:9:10)
        Feb 05 05:12:04 xoa-fs xo-server[32410]:       at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/_writeDisk.mjs:15:22)
        Feb 05 05:12:04 xoa-fs xo-server[32410]: }
        Feb 05 05:12:06 xoa-fs xo-server[32410]: root@10.96.22.111 Xapi#putResource /import/ XapiError: IMPORT_ERROR(INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ])
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/_XapiError.mjs:16:12)
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     at default (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/_getTaskResult.mjs:11:29)
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1006:24)
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     at file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1040:14
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Array.forEach (<anonymous>)
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1030:12)
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1203:14) {
        Feb 05 05:12:06 xoa-fs xo-server[32410]:   code: 'IMPORT_ERROR',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:   params: [ 'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]' ],
        Feb 05 05:12:06 xoa-fs xo-server[32410]:   call: undefined,
        Feb 05 05:12:06 xoa-fs xo-server[32410]:   url: undefined,
        Feb 05 05:12:06 xoa-fs xo-server[32410]:   task: task {
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     uuid: '0f812914-46c0-fe29-d563-1af7bca72d96',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     name_label: '[XO] VM import',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     name_description: '',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     allowed_operations: [],
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     current_operations: {},
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     created: '20240205T10:07:04Z',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     finished: '20240205T10:12:06Z',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     status: 'failure',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     resident_on: 'OpaqueRef:85a049dc-296e-4ef0-bdbc-82e2845ecd68',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     progress: 1,
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     type: '<none/>',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     result: '',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     error_info: [
        Feb 05 05:12:06 xoa-fs xo-server[32410]:       'IMPORT_ERROR',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:       'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]'
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     ],
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     other_config: { object_creation: 'complete' },
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     subtask_of: 'OpaqueRef:NULL',
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     subtasks: [],
        Feb 05 05:12:06 xoa-fs xo-server[32410]:     backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/import.ml)(line 2021))((process xapi)(filename ocaml/xapi/server_>
        Feb 05 05:12:06 xoa-fs xo-server[32410]:   }
        Feb 05 05:12:06 xoa-fs xo-server[32410]: }
        Feb 05 05:12:06 xoa-fs xo-server[32410]: 2024-02-05T10:12:06.930Z xo:api WARN admin@admin.net | vm.importMultipleFromEsxi(...) [5m] =!> Error: no opaque ref found in  undefined
        
        • acomav

          I patched my XO source VM with the latest from 5 Feb and still had the same error:
          "stack": "Error: no opaque ref found in undefined

          It may be that I am not patching correctly, so I have added an XOA trial, moved to the 'latest' channel, and pinged @florent with a support tunnel to test in the morning.

          • florent Vates 🪐 XO Team @acomav

            @acomav you're up to date on your XOA.

            I pushed a new commit fixing an async condition on the fix_xva_import_thin branch. Feel free to test it on your XO from the sources.

            • rmaclachlan

              Thank you @florent for all your help! We got the VM to import now. I will try the other failed VM outside business hours, but I expect it will work now as well!

              • florent Vates 🪐 XO Team @rmaclachlan

                @rmaclachlan said in Import from VMware fails after upgrade to XOA 5.91:

                Thank you @florent for all your help! We got the VM to import now. I will try the other failed VM outside business hours, but I expect it will work now as well!

                thank you for the help

                • jasonmap @florent

                  @florent Nice! This latest change allowed my migration to complete successfully. It seems like the peak transfer speed was about 70 Mbps: 4.77 GB in 5 minutes. I'm guessing the thin/zeros handling made this so fast?

                  • florent Vates 🪐 XO Team @jasonmap

                    @jasonmap said in Import from VMware fails after upgrade to XOA 5.91:

                    @florent Nice! This latest change allowed my migration to complete successfully. It seems like the peak transfer speed was about 70 Mbps: 4.77 GB in 5 minutes. I'm guessing the thin/zeros handling made this so fast?

                    Yay!
                    The thin import makes it fast (especially since it only needs one pass instead of two with the previous API), XCP-ng is a little faster at loading XVAs, and there is some magic. No secret though, everything is done in public.

                    We invested a lot of time and energy to make it fast, and we have more in the pipeline, both to make it work in more cases (vSAN, I am looking at you) and to make it easier to access the content of running VMs.

                    • rmaclachlan

                      Just a quick update: I imported a handful of VMs today and was even able to move over the VM that failed on the weekend, so I think that patch works, @florent.

                      • acomav @florent

                        @florent
                        Thanks. I have kicked off an import, but it takes 2 hours. However, the first small virtual disk has now imported successfully where it was failing before, so I am confident the rest will work. Will update then.

                        Thanks

                        • acomav @acomav

                          @acomav

                          Import completed. Great work @florent.

                          • acomav

                            Hi, a question about these patches and thin provisioning.

                            My test import now works; however, it fully provisioned the disks at their full size on an NFS SR.

                            [root@XXXX ~]# ls -salh /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/
                            total 540G
                            4.0K drwxr-xr-x  2 root root 4.0K Feb  6 09:33 .
                            4.0K drwxr-xr-x 27 root root 4.0K Feb  1 21:22 ..
                            151G -rw-r--r--  1 root root 151G Feb  6 10:45 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
                             13G -rw-r--r--  1 root root  13G Feb  6 09:43 1eae9130-e6eb-45be-ae25-a7dcb7ee8f4e.vhd
                            171G -rw-r--r--  1 root root 171G Feb  6 10:51 751b7a5f-df32-4cb1-9479-e196671e7149.vhd
                            

                            The two large disks are in an LVM VG on the source and, combined, use up 253 GB of the 320 GB LV. They are thin provisioned on the VMware side.

                            Am I wrong to expect the .vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the XCP-ng side?

                            Not a big deal, I am just curious.

                            Thanks
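
                            (For reference, and not specific to this thread: on a file-based SR you can see whether a .vhd is really sparse by comparing its allocated size with its apparent size; the first column of the ls -s output above already shows the allocated size. The flags below assume GNU coreutils.)

                            cd /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188
                            du -h 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd                  # space actually allocated on the NFS share
                            du -h --apparent-size 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd  # nominal (virtual) file size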

                            • florent Vates 🪐 XO Team @acomav

                              Thank you all, now time to do a patch release.

                              @acomav said in Import from VMware fails after upgrade to XOA 5.91:

                              Am I wrong to expect the .vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the XCP-ng side?

                              LVM is thick provisioned on the XCP-ng side: https://xcp-ng.org/docs/storage.html#storage-types
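
                              (As a generic check, not taken from this thread: listing the SR types on the pool shows which storage is thick or thin; lvm, lvmoiscsi and lvmohba SRs are thick provisioned, while file-based SRs such as nfs or ext are thin.)

                              # list all SRs on the pool with their type
                              xe sr-list params=uuid,name-label,type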

                              • olivierlambert Vates 🪐 Co-Founder CEO @acomav

                                @acomav How big are your original VM disks? (e.g. the total disk size on VMware)

                                • acomav @florent

                                  @florent The VM is on an NFS SR which is thin provisioned. LVM is inside the VM on the virtual disks.

                                  • acomav @olivierlambert

                                    @olivierlambert Hi.
                                    The disk sizes (and vmdk file sizes) are 150 GB and 170 GB. Both are in a volume group with one logical volume using 100% of the volume group, formatted with XFS.

                                    Disk space in use is 81%:

                                    # pvs
                                      PV         VG         Fmt  Attr PSize    PFree
                                      /dev/sda2  centos     lvm2 a--   <15.51g    0 
                                      /dev/sdb   VolGroup01 lvm2 a--  <150.00g    0 
                                      /dev/sdc   VolGroup01 lvm2 a--  <170.00g    0 
                                    
                                    # vgs
                                      VG         #PV #LV #SN Attr   VSize   VFree
                                      VolGroup01   2   1   0 wz--n- 319.99g    0 
                                      centos       1   2   0 wz--n- <15.51g    0
                                    
                                    # lvs
                                      LV        VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
                                      IMAPSpool VolGroup01 -wi-ao---- 319.99g 
                                    
                                    # df -h
                                    /dev/mapper/VolGroup01-IMAPSpool  320G  257G   64G  81% /var/spool/imap
                                    

                                     The vmdk files live on an HPE Nimble CS3000 (block iSCSI). I am now thinking I will need to get into the VM and free up discarded/deleted blocks, which would make the vmdk sizes smaller (as they are set to thin provisioned on VMFS).
                                     I'll do that, retry, and report back if I still see the full disk being written out to XCP-ng.

                                     • olivierlambert Vates 🪐 Co-Founder CEO

                                      Good idea, thanks 🙂

                                       • acomav @olivierlambert

                                         @olivierlambert
                                         I can confirm it was on my side. I had to do a few things to get the VMware virtual disks to free up empty space, and once I did, the VM import to XCP-ng to an NFS SR successfully copied the virtual disks in thin mode.
                                         For anyone reading this who is preparing to jump ship off VMware:

                                         I am using vSphere 6.7. I have not tested against vSphere 7 yet. Not bothering with vSphere 8 for obvious reasons. My VM was a CentOS 7 VM with LVM managing the 3 virtual disks.

                                         1. Make sure your Virtual Hardware is at least version 11. My test VM was a very old one still on version 8.
                                         2. For the ESXi host the VM lives on (though you should probably do all hosts in the cluster), go into Advanced Settings and enable (change 0 to 1) VMFS3.EnableBlockDelete. I thought I had this enabled, but only 2 of the 5 hosts in the cluster did. You may need to check that it is not reset after updates (see the command examples below).
                                         3. On CentOS 7 (perhaps because of its age) I could not use 'fstrim' or the 'discard' mount option; they were not supported. Instead I filled the free disk space with zeroes, synced, and then removed the zeroes:
                                         # cd /path/to/mountpoint; dd if=/dev/zero of=./zeroes bs=1M count=1024; sync; rm zeroes
                                        

                                         Change count=1024 (which will create a 1 GB file of zeroes) to however big a file you need to nearly fill up the partition / volume, e.g. count=10240 will make a 10 GB file.
                                         Windows users can use 'sdelete'.
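
                                         (A few hedged command-line equivalents of the steps above; they are generic examples rather than the exact commands used here, and flags can vary by ESXi and guest version.)

                                         # On each ESXi host: check and, if needed, enable automatic VMFS block reclamation
                                         esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
                                         esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1

                                         # On Linux guests where TRIM/discard works end to end, fstrim is simpler than the zero-fill trick
                                         fstrim -av

                                         # On Windows guests, zero free space with Sysinternals SDelete
                                         sdelete.exe -z C: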

                                         I could have waited for vSphere to automatically clean up the datastore in the background at this stage, but I was impatient and Storage vMotioned the virtual disks to NFS storage in thin mode. I confirmed that only the used space was copied across. I then migrated the disks back to my HPE Nimble SAN and retained thin provisioning.

                                         • olivierlambert Vates 🪐 Co-Founder CEO

                                           Great news and also great feedback! I think we'll add this to our guide 🙂
