XCP-ng

    Import from VMware fails after upgrade to XOA 5.91

    • olivierlambertO Offline
      olivierlambert Vates 🪐 Co-Founder CEO
      last edited by

      No problem, keep us posted!

      • K Offline
        khicks @olivierlambert
        last edited by

        No luck on the latest master commit 5d4723e, I'm afraid. Going to try fix_xva_import_thin next.

        {
          "id": "0ls6joz2k",
          "properties": {
            "name": "importing vms 30",
            "userId": "f424aa77-c82d-432a-9228-709a614019e6",
            "total": 1,
            "done": 0,
            "progress": 0
          },
          "start": 1706993226381,
          "status": "failure",
          "updatedAt": 1706994410318,
          "tasks": [
            {
              "id": "8ggmvpx0fno",
              "properties": {
                "name": "importing vm 30",
                "done": 1,
                "progress": 100
              },
              "start": 1706993226382,
              "status": "failure",
              "tasks": [
                {
                  "id": "juhjiy9w6y7",
                  "properties": {
                    "name": "connecting to my-notsoawesome-vmware-server.mydomain.net"
                  },
                  "start": 1706993226383,
                  "status": "success",
                  "end": 1706993226546,
                  "result": {
                    "_events": {},
                    "_eventsCount": 0
                  }
                },
                {
                  "id": "79wfj2zidiu",
                  "properties": {
                    "name": "get metadata of 30"
                  },
                  "start": 1706993226547,
                  "status": "success",
                  "end": 1706993226851,
                  "result": {
                    "name_label": "my-awesome-vm",
                    "memory": 2147483648,
                    "nCpus": 2,
                    "guestToolsInstalled": false,
                    "firmware": "uefi",
                    "powerState": "poweredOff",
                    "disks": [
                      {
                        "capacity": 64424509440,
                        "isFull": true,
                        "uid": "94dc7d43",
                        "fileName": "my-awesome-vm-flat.vmdk",
                        "parentId": "ffffffff",
                        "vmdFormat": "VMFS",
                        "nameLabel": "my-awesome-vm-flat.vmdk",
                        "datastore": "my-awesome-vmware-nfs-datastore",
                        "path": "my-awesome-vm",
                        "descriptionLabel": " from esxi",
                        "node": "scsi0:0"
                      }
                    ],
                    "networks": [
                      {
                        "label": "my-awesome-network",
                        "macAddress": "00:50:56:9f:8a:cc",
                        "isGenerated": false
                      }
                    ]
                  }
                },
                {
                  "id": "soh8ame3hok",
                  "properties": {
                    "name": "build disks and snapshots chains for 30"
                  },
                  "start": 1706993226852,
                  "status": "success",
                  "end": 1706993226852,
                  "result": {
                    "scsi0:0": [
                      {
                        "capacity": 64424509440,
                        "isFull": true,
                        "uid": "94dc7d43",
                        "fileName": "my-awesome-vm-flat.vmdk",
                        "parentId": "ffffffff",
                        "vmdFormat": "VMFS",
                        "nameLabel": "my-awesome-vm-flat.vmdk",
                        "datastore": "my-awesome-vmware-nfs-datastore",
                        "path": "my-awesome-vm",
                        "descriptionLabel": " from esxi",
                        "node": "scsi0:0"
                      }
                    ]
                  }
                },
                {
                  "id": "wvybfjogex",
                  "properties": {
                    "name": "creating MV on XCP side"
                  },
                  "start": 1706993226852,
                  "status": "success",
                  "end": 1706993226954,
                  "result": {
                    "uuid": "a3dd5318-24ec-063b-3f98-9f66d50312e4",
                    "allowed_operations": [
                      "changing_NVRAM",
                      "changing_dynamic_range",
                      "changing_shadow_memory",
                      "changing_static_range",
                      "make_into_template",
                      "migrate_send",
                      "destroy",
                      "export",
                      "start_on",
                      "start",
                      "clone",
                      "copy",
                      "snapshot"
                    ],
                    "current_operations": {},
                    "name_label": "my-awesome-vm",
                    "name_description": "from esxi",
                    "power_state": "Halted",
                    "user_version": 1,
                    "is_a_template": false,
                    "is_default_template": false,
                    "suspend_VDI": "OpaqueRef:NULL",
                    "resident_on": "OpaqueRef:NULL",
                    "scheduled_to_be_resident_on": "OpaqueRef:NULL",
                    "affinity": "OpaqueRef:NULL",
                    "memory_overhead": 20971520,
                    "memory_target": 0,
                    "memory_static_max": 2147483648,
                    "memory_dynamic_max": 2147483648,
                    "memory_dynamic_min": 2147483648,
                    "memory_static_min": 2147483648,
                    "VCPUs_params": {},
                    "VCPUs_max": 2,
                    "VCPUs_at_startup": 2,
                    "actions_after_shutdown": "destroy",
                    "actions_after_reboot": "restart",
                    "actions_after_crash": "restart",
                    "consoles": [],
                    "VIFs": [],
                    "VBDs": [],
                    "VUSBs": [],
                    "crash_dumps": [],
                    "VTPMs": [],
                    "PV_bootloader": "",
                    "PV_kernel": "",
                    "PV_ramdisk": "",
                    "PV_args": "",
                    "PV_bootloader_args": "",
                    "PV_legacy_args": "",
                    "HVM_boot_policy": "BIOS order",
                    "HVM_boot_params": {
                      "order": "cdn"
                    },
                    "HVM_shadow_multiplier": 1,
                    "platform": {
                      "timeoffset": "0",
                      "nx": "true",
                      "acpi": "1",
                      "apic": "true",
                      "pae": "true",
                      "hpet": "true",
                      "viridian": "true"
                    },
                    "PCI_bus": "",
                    "other_config": {
                      "mac_seed": "22ad11b7-fb42-517f-91b1-1e834eb184af",
                      "vgpu_pci": "",
                      "base_template_name": "Other install media",
                      "install-methods": "cdrom"
                    },
                    "domid": -1,
                    "domarch": "",
                    "last_boot_CPU_flags": {},
                    "is_control_domain": false,
                    "metrics": "OpaqueRef:d405d6a0-178b-45b2-8a92-5db36d65a220",
                    "guest_metrics": "OpaqueRef:NULL",
                    "last_booted_record": "",
                    "recommendations": "<restrictions><restriction field=\"memory-static-max\" max=\"137438953472\" /><restriction field=\"vcpus-max\" max=\"32\" /><restriction property=\"number-of-vbds\" max=\"255\" /><restriction property=\"number-of-vifs\" max=\"7\" /><restriction field=\"has-vendor-device\" value=\"false\" /></restrictions>",
                    "xenstore_data": {},
                    "ha_always_run": false,
                    "ha_restart_priority": "",
                    "is_a_snapshot": false,
                    "snapshot_of": "OpaqueRef:NULL",
                    "snapshots": [],
                    "snapshot_time": "19700101T00:00:00Z",
                    "transportable_snapshot_id": "",
                    "blobs": {},
                    "tags": [],
                    "blocked_operations": {},
                    "snapshot_info": {},
                    "snapshot_metadata": "",
                    "parent": "OpaqueRef:NULL",
                    "children": [],
                    "bios_strings": {},
                    "protection_policy": "OpaqueRef:NULL",
                    "is_snapshot_from_vmpp": false,
                    "snapshot_schedule": "OpaqueRef:NULL",
                    "is_vmss_snapshot": false,
                    "appliance": "OpaqueRef:NULL",
                    "start_delay": 0,
                    "shutdown_delay": 0,
                    "order": 0,
                    "VGPUs": [],
                    "attached_PCIs": [],
                    "suspend_SR": "OpaqueRef:NULL",
                    "version": 0,
                    "generation_id": "0:0",
                    "hardware_platform_version": 0,
                    "has_vendor_device": false,
                    "requires_reboot": false,
                    "reference_label": "",
                    "domain_type": "hvm",
                    "NVRAM": {}
                  }
                },
                {
                  "id": "fx32r2hmr3w",
                  "properties": {
                    "name": "Cold import of disks scsi0:0"
                  },
                  "start": 1706993226955,
                  "status": "failure",
                  "end": 1706994410295,
                  "result": {
                    "message": "already finalized or destroyed",
                    "name": "Error",
                    "stack": "Error: already finalized or destroyed\n    at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/tar-stream/pack.js:138:51)\n    at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/promise-toolbox/fromCallback.js:5:6)\n    at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/debuggability.js:384:9)\n    at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/promise.js:518:18)\n    at new Promise (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/promise.js:103:10)\n    at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/promise-toolbox/fromCallback.js:9:10)\n    at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/_writeDisk.mjs:6:22)\n    at addDisk (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/_writeDisk.mjs:27:13)\n    at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/importVm.mjs:22:5)\n    at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/importVdi.mjs:6:17)\n    at file:///opt/xo/xo-builds/xen-orchestra-202402031433/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402031433/@vates/task/index.js:158:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202402031433/@vates/task/index.js:141:20)"
                  }
                }
              ],
              "end": 1706994410318,
              "result": {
                "message": "already finalized or destroyed",
                "name": "Error",
                "stack": "Error: already finalized or destroyed\n    at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/tar-stream/pack.js:138:51)\n    at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/promise-toolbox/fromCallback.js:5:6)\n    at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/debuggability.js:384:9)\n    at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/promise.js:518:18)\n    at new Promise (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/promise.js:103:10)\n    at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/promise-toolbox/fromCallback.js:9:10)\n    at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/_writeDisk.mjs:6:22)\n    at addDisk (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/_writeDisk.mjs:27:13)\n    at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/importVm.mjs:22:5)\n    at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/importVdi.mjs:6:17)\n    at file:///opt/xo/xo-builds/xen-orchestra-202402031433/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402031433/@vates/task/index.js:158:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202402031433/@vates/task/index.js:141:20)"
              }
            }
          ],
          "end": 1706994410318,
          "result": {
            "succeeded": {},
            "message": "already finalized or destroyed",
            "name": "Error",
            "stack": "Error: already finalized or destroyed\n    at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/tar-stream/pack.js:138:51)\n    at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/promise-toolbox/fromCallback.js:5:6)\n    at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/debuggability.js:384:9)\n    at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/promise.js:518:18)\n    at new Promise (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/bluebird/js/release/promise.js:103:10)\n    at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402031433/node_modules/promise-toolbox/fromCallback.js:9:10)\n    at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/_writeDisk.mjs:6:22)\n    at addDisk (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/_writeDisk.mjs:27:13)\n    at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/importVm.mjs:22:5)\n    at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402031433/@xen-orchestra/xva/importVdi.mjs:6:17)\n    at file:///opt/xo/xo-builds/xen-orchestra-202402031433/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402031433/@vates/task/index.js:158:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202402031433/@vates/task/index.js:141:20)"
          }
        }
        
        • K Offline
          khicks
          last edited by

          Hooray, fix_xva_import_thin works! 🎉 Huge thanks to @florent for the fix!

          FWIW, jumping around between commits was a breeze thanks to that external tool. 😉

          • olivierlambertO Offline
            olivierlambert Vates 🪐 Co-Founder CEO
            last edited by

            FYI, jumping between commits is simply a git checkout <target commit> 😉

            • A Offline
              archw @acomav
              last edited by archw

              @acomav said in Import from VMware fails after upgrade to XOA 5.91:

              redid the job with a snapshot from a running VM to a local SR. Same issue occurred at the same time.

              I too did the same thing; it didn't work for me either.

              Here is what did work:

              1. Ran quick deploy (see the command sketch after this list):
                https://xen-orchestra.com/#!/xoa
              2. It installs an older version (I don't remember which one).
              3. I had it do one upgrade, which takes it to XOA 5.90.
              4. Once it did that, it let me do the import from ESXi.
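
              For reference, the quick deploy can also be kicked off from the XCP-ng pool master's console instead of the web page. A minimal sketch, assuming the standard deploy script URL from the XOA docs:

                  # run as root on the pool master (dom0): fetch and run the XOA deploy script
                  bash -c "$(wget -qO- https://xoa.io/deploy)"
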
              • olivierlambertO Offline
                olivierlambert Vates 🪐 Co-Founder CEO
                last edited by

                5.90 is the current stable release channel, without the speed improvement (and the bug). We'll have a patch release for 5.91 (current latest) that will solve it 🙂

                • A Offline
                  archw @olivierlambert
                  last edited by

                  @olivierlambert
                  Thanks!

                  • florentF Offline
                    florent Vates 🪐 XO Team @archw
                    last edited by florent

                    @khicks: that is great news!

                    @archw @jasonmap @rmaclachlan I pushed a new commit to the fix_xva_import_thin branch, aligning the last block to exactly 1MB. Could you test whether the imports are working now?

                    For those who have an XOA and want to help, please open a tunnel and send me the tunnel by chat (not directly in this topic), and I will patch your appliance.

                    For those who use XO from the sources, you'll need to change branch (see the sketch below).
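
                    A minimal sketch of the branch switch for an XO-from-source install, assuming a plain git checkout of xen-orchestra (adjust the path, and restart xo-server however your setup normally does):

                        cd /path/to/xen-orchestra         # your xen-orchestra git checkout
                        git fetch origin                  # pick up the new branch and commits
                        git checkout fix_xva_import_thin  # switch to the fix branch
                        git pull                          # make sure you are on the latest commit of the branch
                        yarn && yarn build                # rebuild the packages
                        # then restart xo-server (systemd unit, forever, ... depending on your setup)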

                    • J Offline
                      jasonmap @florent
                      last edited by

                      @florent

                      Over the weekend I spun up an XO instance from source. This morning, after your post, I switched to 'fix_xva_import_thin'. Unfortunately, it's still the same failure for me. The only notable difference I see is that your ${str} addition for the log now comes back as " undefined".

                      Here are my logs:

                      From XO:

                      vm.importMultipleFromEsxi
                      {
                        "concurrency": 2,
                        "host": "vsphere.nest.local",
                        "network": "7f7d2fcc-c78b-b1c9-101a-0ca9570e3462",
                        "password": "* obfuscated *",
                        "sr": "50d8f945-8ae4-dd87-0149-e6054a10d51f",
                        "sslVerify": false,
                        "stopOnError": true,
                        "stopSource": true,
                        "user": "administrator@vsphere.local",
                        "vms": [
                          "vm-2427"
                        ]
                      }
                      {
                        "succeeded": {},
                        "message": "no opaque ref found in  undefined",
                        "name": "Error",
                        "stack": "Error: no opaque ref found in  undefined
                          at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/importVm.mjs:28:19)
                          at processTicksAndRejections (node:internal/process/task_queues:95:5)
                          at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/importVdi.mjs:6:17)
                          at file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21
                          at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402050455/@vates/task/index.js:158:22)
                          at Task.run (/opt/xo/xo-builds/xen-orchestra-202402050455/@vates/task/index.js:141:20)"
                      }
                      

                      and from journalctl:

                      Feb 05 05:12:04 xoa-fs xo-server[32410]: 2024-02-05T10:12:04.864Z xo:xo-server WARN possibly unhandled rejection {
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:   error: Error: already finalized or destroyed
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/tar-stream/pack.js:138:51)
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/promise-toolbox/fromCallback.js:5:6)
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/debuggability.js:384:9)
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/promise.js:518:18)
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:       at new Promise (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/bluebird/js/release/promise.js:103:10)
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:       at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402050455/node_modules/promise-toolbox/fromCallback.js:9:10)
                      Feb 05 05:12:04 xoa-fs xo-server[32410]:       at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402050455/@xen-orchestra/xva/_writeDisk.mjs:15:22)
                      Feb 05 05:12:04 xoa-fs xo-server[32410]: }
                      Feb 05 05:12:06 xoa-fs xo-server[32410]: root@10.96.22.111 Xapi#putResource /import/ XapiError: IMPORT_ERROR(INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ])
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/_XapiError.mjs:16:12)
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     at default (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/_getTaskResult.mjs:11:29)
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1006:24)
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     at file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1040:14
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Array.forEach (<anonymous>)
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1030:12)
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202402050455/packages/xen-api/index.mjs:1203:14) {
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:   code: 'IMPORT_ERROR',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:   params: [ 'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]' ],
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:   call: undefined,
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:   url: undefined,
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:   task: task {
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     uuid: '0f812914-46c0-fe29-d563-1af7bca72d96',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     name_label: '[XO] VM import',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     name_description: '',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     allowed_operations: [],
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     current_operations: {},
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     created: '20240205T10:07:04Z',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     finished: '20240205T10:12:06Z',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     status: 'failure',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     resident_on: 'OpaqueRef:85a049dc-296e-4ef0-bdbc-82e2845ecd68',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     progress: 1,
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     type: '<none/>',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     result: '',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     error_info: [
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:       'IMPORT_ERROR',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:       'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]'
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     ],
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     other_config: { object_creation: 'complete' },
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     subtask_of: 'OpaqueRef:NULL',
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     subtasks: [],
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:     backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/import.ml)(line 2021))((process xapi)(filename ocaml/xapi/server_>
                      Feb 05 05:12:06 xoa-fs xo-server[32410]:   }
                      Feb 05 05:12:06 xoa-fs xo-server[32410]: }
                      Feb 05 05:12:06 xoa-fs xo-server[32410]: 2024-02-05T10:12:06.930Z xo:api WARN admin@admin.net | vm.importMultipleFromEsxi(...) [5m] =!> Error: no opaque ref found in  undefined
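
                      Side note: the journalctl trace above ends in Unix.ENOSPC, i.e. the destination ran out of space while the import was being written. A quick way to check how full the target SR is from dom0 (a sketch using standard xe parameters; sizes are in bytes):

                          xe sr-list uuid=<SR uuid> params=name-label,physical-size,physical-utilisation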
                      
                      • A Offline
                        acomav
                        last edited by

                        I patched my XO source VM with the latest from 5th Feb and still had the same error.
                        "stack": "Error: no opaque ref found in undefined

                        It may be that I am not patching correctly, so I have added an XOA trial, moved to the 'latest' channel, and have pinged @florent with a support tunnel to test in the morning.
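
                        In case it helps anyone else double-check their patching, here is a quick sanity check on the build directory (a sketch; the path follows the layout seen in the stack traces earlier in this thread, so adjust it to wherever your checkout actually lives):

                            cd /opt/xo/xo-builds/xen-orchestra-<timestamp>   # the directory shown in your xo-server logs
                            git branch --show-current                        # should print fix_xva_import_thin
                            git log -1 --oneline                             # should match florent's latest commit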

                        • florentF Offline
                          florent Vates 🪐 XO Team @acomav
                          last edited by

                          @acomav you're up to date on your XOA

                          I pushed a new commit, fixing an async condition on the fix_xva_import_thin branch. Feel free to test on your XO from source.

                          • R Offline
                            rmaclachlan
                            last edited by

                            Thank you @florent for all your help! We got the VM to import now, I will try the other failed VM outside business hours but I expect it will work now as well!

                            • florentF Offline
                              florent Vates 🪐 XO Team @rmaclachlan
                              last edited by

                              @rmaclachlan said in Import from VMware fails after upgrade to XOA 5.91:

                              Thank you @florent for all your help! We got the VM to import now, I will try the other failed VM outside business hours but I expect it will work now as well!

                              thank you for the help

                              • J Offline
                                jasonmap @florent
                                last edited by

                                @florent Nice! This latest change allowed my migration to complete successfully. Seems like the peak transfer speed was about 70Mbps. 4.77GB in 5 minutes. I'm guessing the thin/zeros made this so fast?
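
                                Rough back-of-the-envelope check (assuming "Mbps" means megabits per second and decimal GB): 4.77 GB in 5 minutes averages out well above a 70 Mbps wire peak, which fits the guess that zeroed blocks were skipped rather than transferred:

                                    # 4.77 GB * 8 bits/byte * 1000 Mbit/Gbit / 300 s
                                    echo "scale=1; 4.77 * 8 * 1000 / 300" | bc   # ~127 Mbit/s average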

                                • florentF Offline
                                  florent Vates 🪐 XO Team @jasonmap
                                  last edited by florent

                                  @jasonmap said in Import from VMware fails after upgrade to XOA 5.91:

                                  @florent Nice! This latest change allowed my migration to complete successfully. Seems like the peak transfer speed was about 70Mbps. 4.77GB in 5 minutes. I'm guessing the thin/zeros made this so fast?

                                  Yay!
                                  The thin handling makes it fast (especially since it only needs one pass instead of two with the previous API), XCP-ng is a little faster at loading XVAs, and there is some magic. No secret though, everything is done in public.

                                  We invested a lot of time and energy to make it fast, and we have more in the pipeline, to make it work in more cases (vSAN, I am looking at you) and to make it easier to access the content of running VMs.

                                  • R Offline
                                    rmaclachlan
                                    last edited by

                                    Just a quick update - I imported a handful of VMs today and was even able to move over the VM that failed on the weekend so I think that patch works @florent

                                    • A Offline
                                      acomav @florent
                                      last edited by

                                      @florent
                                      Thanks. I have kicked off an import, but it takes 2 hours. However, the first small virtual disk has now imported successfully where it was failing before, so I am confident the rest will work. Will update then.

                                      Thanks

                                      • A Offline
                                        acomav @acomav
                                        last edited by

                                        @acomav

                                        Import completed. Great work @florent.

                                        • A Offline
                                          acomav
                                          last edited by

                                          Hi, a question about these patches and thin provisioning.

                                           My test import now works; however, it fully provisioned the full size of the disks on an NFS SR.

                                          [root@XXXX ~]# ls -salh /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/
                                          total 540G
                                          4.0K drwxr-xr-x  2 root root 4.0K Feb  6 09:33 .
                                          4.0K drwxr-xr-x 27 root root 4.0K Feb  1 21:22 ..
                                          151G -rw-r--r--  1 root root 151G Feb  6 10:45 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
                                           13G -rw-r--r--  1 root root  13G Feb  6 09:43 1eae9130-e6eb-45be-ae25-a7dcb7ee8f4e.vhd
                                          171G -rw-r--r--  1 root root 171G Feb  6 10:51 751b7a5f-df32-4cb1-9479-e196671e7149.vhd
                                          

                                          The two large disks are in an LVM VG on the source and combined, use up 253 GB of the 320 GB LV. They are thin provisioned on the VMware side.

                                          Am I wrong to expect the vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the xcp-ng side?

                                          Not a big deal, I am just curious.

                                          Thanks
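
                                           One way to double-check how much of each VHD is really allocated on the NFS SR (a sketch: du is plain coreutils, and the vhd-util flags are from memory, so verify them with vhd-util query -h first):

                                               cd /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188
                                               du -h --apparent-size *.vhd   # virtual (apparent) file sizes
                                               du -h *.vhd                   # blocks actually allocated on the share
                                               vhd-util query -n 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd -v -s   # virtual size and physical utilisation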

                                          • florentF Offline
                                            florent Vates 🪐 XO Team @acomav
                                            last edited by

                                             Thank you all, now time to do a patch release.

                                            @acomav said in Import from VMware fails after upgrade to XOA 5.91:

                                            Hi, a question about these patches and thin provisioning.

                                             My test import now works; however, it fully provisioned the full size of the disks on an NFS SR.

                                            [root@XXXX ~]# ls -salh /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/
                                            total 540G
                                            4.0K drwxr-xr-x  2 root root 4.0K Feb  6 09:33 .
                                            4.0K drwxr-xr-x 27 root root 4.0K Feb  1 21:22 ..
                                            151G -rw-r--r--  1 root root 151G Feb  6 10:45 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
                                             13G -rw-r--r--  1 root root  13G Feb  6 09:43 1eae9130-e6eb-45be-ae25-a7dcb7ee8f4e.vhd
                                            171G -rw-r--r--  1 root root 171G Feb  6 10:51 751b7a5f-df32-4cb1-9479-e196671e7149.vhd
                                            

                                            The two large disks are in an LVM VG on the source and combined, use up 253 GB of the 320 GB LV. They are thin provisioned on the VMware side.

                                            Am I wrong to expect the vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the xcp-ng side?

                                            Not a big deal, I am just curious.

                                            Thanks

                                             LVM is thick provisioned on the XCP-ng side: https://xcp-ng.org/docs/storage.html#storage-types
