VMware migration tool: we need your feedback!
-
@brezlord Yes, it looks like the disks themselves end up thin, but whatever space was used on the thick disk is what the disk is inflated to.
i.e. without thin=true, a 100GB thick disk on ESXi with only 20GB used would end up taking 100GB on the SR. But if you grow the disk beyond the 100GB, it's all thin from that point, so you could change it to 200GB and it would still use only 100GB of actual space.
BUT with thin=true, the 100GB thick disk on ESXi that only has 20GB used would ONLY use 20GB of space on the SR in XCP-ng (so it'd only be inflated to 20GB while still showing the OS a 100GB disk). This saves a lot of space if you have VMs with massive disks and low usage (like one of the hosts I am migrating, which has 13TB of assigned thick disks but only about 4TB used).
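One way to sanity-check this after an import (just standard xe commands, nothing specific to the migration tool; get the VDI UUID from xe vdi-list):
xe vdi-param-get uuid=<VDI-UUID> param-name=virtual-size         # the size the guest OS sees
xe vdi-param-get uuid=<VDI-UUID> param-name=physical-utilisation # the space actually consumed on the SR
On a thin SR, physical-utilisation should stay close to the data actually written rather than the full virtual size.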
@olivierlambert did I get this right?
-
@planedrop Great explanation, I may use it later
-
@brezlord the imported VM should have the network of all its interfaces set to the networkId passed on the command line. I will look into this
-
@brezlord: I pushed a fix for the missing network
-
@planedrop If your SR is thick (local LVM, iSCSI), it doesn't matter: the disk will always take the full size of the disk, regardless of thin=true in the VMware migration tool. This last option is only relevant for thin SRs (local ext, NFS etc.)
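Not sure which kind of SR you have? You can check its type with a standard xe command:
xe sr-list params=uuid,name-label,type   # lvm / lvmoiscsi are thick, ext / nfs are thin
-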
@florent I get the following error now.
root@xoa:~# xo-cli vm.importFromEsxi host=192.168.40.203 user='root' password='obfuscated ' sslVerify=false vm=30 sr=accb1cf1-92b7-5d47-e2c4-e7d8a282c448 network=83594c5b-8b5b-b45f-d3a7-7e5301468dc8 thin=true
HANDLE_INVALID(network, OpaqueRef:478f9e9d-7592-40a2-ab07-10a0a6982e45)
JsonRpcError: HANDLE_INVALID(network, OpaqueRef:478f9e9d-7592-40a2-ab07-10a0a6982e45)
    at Peer._callee$ (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/json-rpc-peer/dist/index.js:139:44)
    at tryCatch (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
    at Generator.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
    at Generator.next (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
    at asyncGeneratorStep (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
    at _next (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
    at /opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
    at new Promise (<anonymous>)
    at Peer.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
    at Peer.exec (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/json-rpc-peer/dist/index.js:182:20)
vm.importFromEsxi
{
  "host": "192.168.40.203",
  "user": "root",
  "password": "* obfuscated *",
  "sslVerify": false,
  "vm": "30",
  "sr": "accb1cf1-92b7-5d47-e2c4-e7d8a282c448",
  "network": "83594c5b-8b5b-b45f-d3a7-7e5301468dc8",
  "thin": true
}
{
  "code": "HANDLE_INVALID",
  "params": [
    "network",
    "OpaqueRef:478f9e9d-7592-40a2-ab07-10a0a6982e45"
  ],
  "call": {
    "method": "network.get_MTU",
    "params": [
      "OpaqueRef:478f9e9d-7592-40a2-ab07-10a0a6982e45"
    ]
  },
  "message": "HANDLE_INVALID(network, OpaqueRef:478f9e9d-7592-40a2-ab07-10a0a6982e45)",
  "name": "XapiError",
  "stack": "XapiError: HANDLE_INVALID(network, OpaqueRef:478f9e9d-7592-40a2-ab07-10a0a6982e45)
    at Function.wrap (/opt/xo/xo-builds/xen-orchestra-202301251747/packages/xen-api/src/_XapiError.js:16:12)
    at /opt/xo/xo-builds/xen-orchestra-202301251747/packages/xen-api/src/transports/json-rpc.js:37:27
    at AsyncResource.runInAsyncScope (node:async_hooks:204:9)
    at cb (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/util.js:355:42)
    at tryCatcher (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromise0 (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/promise.js:649:10)
    at Promise._settlePromises (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/promise.js:729:18)
    at _drainQueueStep (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/async.js:93:12)
    at _drainQueue (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/opt/xo/xo-builds/xen-orchestra-202301251747/node_modules/bluebird/js/release/async.js:15:14)
    at processImmediate (node:internal/timers:471:21)
    at process.callbackTrampoline (node:internal/async_hooks:130:17)"
}
-
@brezlord that looks like an invalid network id, are you sure it's ok?
(this was not visible before since the code was skipping VIF creation)
-
@florent It's copied directly from the XO web UI.
-
Are you sure it's not a PIF? Can you try:
xe network-param-list uuid=<UUID>
If it doesn't work, then do it for a PIF:
xe pif-param-list uuid=<UUID>
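If in doubt, you can also list the pool's networks first and copy the UUID from there:
xe network-list params=uuid,name-label,bridge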
-
@olivierlambert said in VMware migration tool: we need your feedback!:
xe network-param-list uuid=
Yes, you are right, I had an error in the UUID: I copied it from the host and not the pool.
-
Does it work now? In XO6, we'll do everything to avoid the confusion between a PIF and a network; it should be clearer than it is today.
-
@olivierlambert Yes. I'm very excited to test out XO6.
-
Just as some additional feedback about this, I tried with thin=true today on a small 20GB Ubuntu VM and it worked great!!
I do have a suggestion though: I'd love to see a task in XOA for the phase that reads the blocks from the ESXi VM. When I entered the command, I thought nothing was working because a task never started, but when I checked the network stats on the host it was very clear it was reading from ESXi. Once it finished reading, it imported the disk (which created a task), and the actual VHD space used is only 7.5GB!
-
Yes, as I explained, the first pass just reads the whole VMware disk once; the transfer only happens on the second pass.
That's why there's no XCP-ng task during the first pass (until now, we only had XCP-ng tasks; there were no XO tasks).
But that's changing: we are about to release XO tasks, so it will be easier to track the job
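In the meantime, a rough way to see when the second pass has started is to watch for the disk import to appear on the pool (standard xe command, assuming the import surfaces as a XAPI task as described above):
xe task-list   # the import task shows up here once the transfer pass begins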
-
@olivierlambert Totally makes sense, this is great news.
Thanks again! I'll be testing the tool on some very large VMs in the coming weeks so I'll report back if anything weird happens with those.
-
@florent When I import a Windows 10 VM from ESXi to XCP-ng, the boot firmware is incorrectly set to BIOS when the source VM was UEFI. It would also be advantageous to have the MAC address(es) copied over by default, or at least as an option.
Thanks,
Simon
-
@brezlord nice catch, I will work on it today
@planedrop: task progression is in the works with @julien-f, and I hope it will reach maturity soon
-
@brezlord MAC address and UEFI should work now
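For a VM imported before this fix, the firmware can also be corrected by hand. A minimal sketch, assuming the VM is halted and <VM-UUID>, <VIF-UUID>, <NETWORK-UUID> and <ORIGINAL-MAC> are placeholders you fill in yourself:
xe vm-param-set uuid=<VM-UUID> HVM-boot-params:firmware=uefi   # switch the boot firmware back to UEFI
A VIF's MAC can't be edited in place, so restoring the original MAC means recreating the interface:
xe vif-destroy uuid=<VIF-UUID>
xe vif-create vm-uuid=<VM-UUID> network-uuid=<NETWORK-UUID> device=0 mac=<ORIGINAL-MAC>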
-
@florent I will rebuild XO and re-import and give feedback. Thanks.
-
It's now available directly on master (from the sources) or on the latest XOA release channel (I updated the first post accordingly)