VMware migration tool: we need your feedback!
-
@florent I have rebuilt from source and I get the following errors. The host is running VMware ESXi, 7.0.3, 20328353
```
root@xoa:~# xo-cli vm.importFromEsxi host=192.168.40.203 user='root' password='obfuscated' sslVerify=false vm=12 sr=648548b5-a789-6c72-2518-407a12717fad network=0b3e9312-541d-a036-06b4-2bd63c53d852
✖ Cannot create property 'detailed' on string 'ubuntu-64'
JsonRpcError: Cannot create property 'detailed' on string 'ubuntu-64'
    at Peer._callee$ (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/json-rpc-peer/dist/index.js:139:44)
    at tryCatch (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
    at Generator.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
    at Generator.next (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
    at asyncGeneratorStep (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
    at _next (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
    at /opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
    at new Promise (<anonymous>)
    at Peer.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
    at Peer.exec (/opt/xo/xo-builds/xen-orchestra-202301240816/node_modules/json-rpc-peer/dist/index.js:182:20)
```

The corresponding XO log entry:

```
vm.importFromEsxi
{
  "host": "192.168.40.203",
  "user": "root",
  "password": "* obfuscated *",
  "sslVerify": false,
  "vm": "12",
  "sr": "648548b5-a789-6c72-2518-407a12717fad",
  "network": "0b3e9312-541d-a036-06b4-2bd63c53d852"
}
{
  "message": "Cannot create property 'detailed' on string 'ubuntu-64'",
  "name": "TypeError",
  "stack": "TypeError: Cannot create property 'detailed' on string 'ubuntu-64'
    at set (file:///opt/xo/xo-builds/xen-orchestra-202301240816/@xen-orchestra/vmware-explorer/parsers/vmx.mjs:35:18)
    at set (file:///opt/xo/xo-builds/xen-orchestra-202301240816/@xen-orchestra/vmware-explorer/parsers/vmx.mjs:37:7)
    at file:///opt/xo/xo-builds/xen-orchestra-202301240816/@xen-orchestra/vmware-explorer/parsers/vmx.mjs:47:5
    at Array.forEach (<anonymous>)
    at parseVmx (file:///opt/xo/xo-builds/xen-orchestra-202301240816/@xen-orchestra/vmware-explorer/parsers/vmx.mjs:45:20)
    at Esxi.getTransferableVmMetadata (file:///opt/xo/xo-builds/xen-orchestra-202301240816/@xen-orchestra/vmware-explorer/esxi.mjs:197:17)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at MigrateVm.migrationfromEsxi (file:///opt/xo/xo-builds/xen-orchestra-202301240816/packages/xo-server/src/xo-mixins/migrate-vm.mjs:171:28)
    at Xo.importFromEsxi (file:///opt/xo/xo-builds/xen-orchestra-202301240816/packages/xo-server/src/api/vm.mjs:1307:10)
    at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202301240816/packages/xo-server/src/xo-mixins/api.mjs:394:20)"
}
```
-
This post is deleted! -
@florent I'm ready to test when you have something, thanks.
-
@brezlord great, that is a usable message.
Could you post (or send by email) the vmx file?
-
```
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "19"
vmci0.present = "TRUE"
floppy0.present = "FALSE"
memSize = "2048"
tools.upgrade.policy = "manual"
sched.cpu.units = "mhz"
vm.createDate = "1613734854100000"
scsi0.virtualDev = "lsilogic"
scsi0.present = "TRUE"
sata0.present = "TRUE"
sata0:0.startConnected = "FALSE"
sata0:0.deviceType = "atapi-cdrom"
sata0:0.clientDevice = "TRUE"
sata0:0.fileName = "emptyBackingString"
sata0:0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "Graylog-000001.vmdk"
sched.scsi0:0.shares = "normal"
sched.scsi0:0.throughputCap = "off"
scsi0:0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.shares = "normal"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:8f:51:24"
ethernet0.uptCompatibility = "TRUE"
ethernet0.present = "TRUE"
displayName = "Graylog"
guestOS = "ubuntu-64"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
tools.syncTime = "FALSE"
uuid.bios = "42 0f 33 f8 9b 7f 7d 26-a8 bc 61 26 ee 46 16 22"
vc.uuid = "50 0f dd 17 4c f8 3c 79-1f 26 ac 99 23 6a 06 d4"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.min = "0"
sched.mem.minSize = "0"
sched.mem.shares = "normal"
migrate.encryptionMode = "opportunistic"
ftcpt.ftEncryptionMode = "ftEncryptionOpportunistic"
vmci0.id = "-297396701"
cleanShutdown = "FALSE"
ethernet0.networkName = "vLAN_40"
uuid.location = "56 4d 44 f3 60 00 fc b5-f6 4b f4 e7 50 0a 74 f2"
sched.cpu.affinity = "all"
tools.guest.desktop.autolock = "FALSE"
nvram = "Graylog.nvram"
pciBridge0.present = "TRUE"
svga.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
hpet0.present = "TRUE"
sched.cpu.latencySensitivity = "normal"
numa.autosize.cookie = "10012"
numa.autosize.vcpu.maxPerVirtualNode = "1"
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "16"
ethernet0.pciSlotNumber = "160"
vmci0.pciSlotNumber = "32"
sata0.pciSlotNumber = "33"
monitor.phys_bits_used = "45"
vmotion.checkpointFBSize = "4194304"
vmotion.checkpointSVGAPrimarySize = "4194304"
softPowerOff = "FALSE"
svga.guestBackedPrimaryAware = "TRUE"
guestOS.detailed.data = "architecture='X86' bitness='64' distroName='Ubuntu' distroVersion='20.04' familyName='Linux' kernelVersion='5.4.0-88-generic' prettyName='Ubuntu 20.04.3 LTS'"
toolsInstallManager.updateCounter = "4"
viv.moid = "a3752e64-f7d6-4473-8fd5-3b91a194cccc:vm-7093:9iC7plvv3uF5K6K4G++QYV5VZf6Mjp3GrS0FdLa2/1w="
guestInfo.detailed.data = "architecture='X86' bitness='64' distroName='Ubuntu' distroVersion='20.04' familyName='Linux' kernelVersion='5.4.0-137-generic' prettyName='Ubuntu 20.04.3 LTS'"
checkpoint.vmState.readOnly = "FALSE"
SCSI0:0.ctkEnabled = "TRUE"
ctkEnabled = "TRUE"
sched.swap.derivedName = "/vmfs/volumes/dc0ddd0a-e7e89bd7/Graylog/Graylog-eda98aed.vswp"
migrate.hostLog = "Graylog-50f77235.hlog"
guestinfo.vmtools.buildNumber = "18090558"
guestinfo.vmtools.description = "open-vm-tools 11.3.0 build 18090558"
guestinfo.vmtools.versionNumber = "11360"
guestinfo.vmtools.versionString = "11.3.0"
scsi0:0.redo = ""
vmotion.svga.mobMaxSize = "4194304"
vmotion.svga.graphicsMemoryKB = "4096"
```
-
I edited your post to get the right "code" display (you need to have the three backticks ```` ``` ```` on a new line).
-
@olivierlambert thanks, and duly noted.
-
@brezlord I found it: `guestOS` is first used as a plain string, and then with additional nested properties (`guestOS.detailed.data`). Can you pull and try again?
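For anyone curious why that breaks, here is a minimal sketch (my own illustration, not the actual `vmx.mjs` code) of how a nested-key walk over the .vmx entries trips over the earlier plain-string value:

```
// sketch.mjs — illustrative only, NOT the project's parser.
// The .vmx file defines `guestOS` as a plain string and later
// `guestOS.detailed.data` as a nested key.
const lines = [
  `guestOS = "ubuntu-64"`,
  `guestOS.detailed.data = "distroName='Ubuntu' distroVersion='20.04'"`,
]

const vm = {}
for (const line of lines) {
  const i = line.indexOf('=')
  const path = line.slice(0, i).trim().split('.')
  const value = line.slice(i + 1).trim().replace(/^"|"$/g, '')
  let node = vm
  for (const part of path.slice(0, -1)) {
    node[part] ??= {} // throws once `node` is the string "ubuntu-64"
    node = node[part]
  }
  node[path.at(-1)] = value
}
// Run as an ES module (strict mode); the second entry throws:
//   TypeError: Cannot create property 'detailed' on string 'ubuntu-64'
```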
-
@florent ok, give me 10 minutes to rebuild and I'll report back.
-
@florent you've fixed it. The task has started. I'll report back when the import has finished. XO is saying 1 hour.
-
\o/
Now:
-
@brezlord you may also try the "thin=true" option: it will take longer, but it will build a thin disk, without the unallocated sectors (for now, the progress is only visible on the XO logs side).
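For example, reusing the command posted earlier in this thread (the host, SR and network UUIDs come from that post and will differ on your setup; I'm assuming `thin=true` is simply passed alongside the other parameters):

```
xo-cli vm.importFromEsxi \
  host=192.168.40.203 user='root' password='obfuscated' sslVerify=false \
  vm=12 \
  sr=648548b5-a789-6c72-2518-407a12717fad \
  network=0b3e9312-541d-a036-06b4-2bd63c53d852 \
  thin=true
```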
-
@florent I'll give it a go once this import has finished and report back.
-
@florent The import was successful, the VM is up and running. I will try another import with the "thin=true" option.
-
@brezlord said in VMware migration tool: we need your feedback!:
@florent The import was successful, the VM is up and running. I will try another import with the "thin=true" option.
Yeah! Thank you for your patience.
-
\o/
-
@florent thank you for your work.
-
@florent Great news. I'm looking forward to testing when it has been pushed to the vmware channel.
-
This will land on XOA `latest` by the end of the month.
-
@olivierlambert Got this working myself as well, very seamless once everything is prepped for it!
I do have a question though, since I noticed you mentioned thin=true here: does doing this without that option create a thick-provisioned disk? Maybe I misunderstood that in the blog post.
Mine took around 2.5 hours to complete for a VM with 6 disks, but the total data used on those disks was only around 20 GB, so it seems very slow; if it's transferring the full thick provisioning, though, that's a different story. (In case it matters, everything is 10GbE here.)