VMware migration tool: we need your feedback!
-
Does it work now? In XO6, we'll do everything to avoid the confusion between a PIF and a network; it should be clearer than it is today.
-
@olivierlambert Yes. I'm very excited to test out XO6.
-
As some additional feedback on this: I tried with thin=true today on a small 20GB Ubuntu VM and it worked great!
I do have a suggestion though: I'd love to see a task in XOA for reading the blocks from the ESXi VM. When I entered the command, I thought nothing was working because a task never started, but when I checked the network stats on the host it was very clear it was reading from ESXi. Once it finished reading, it imported the disk (which did create a task), and the actual VHD space used is only 7.5GB!
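For context, the invocation was along these lines (a rough sketch from memory; the IDs are placeholders, so double-check the exact syntax against the first post):

xo-cli vm.importFromEsxi \
  host=<ESXI_IP> user=root password='<ESXI_PASSWORD>' sslVerify=false \
  vm=<ESXI_VM_ID> sr=<SR_UUID> network=<NETWORK_UUID> \
  thin=true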
-
Yes, as I explained, the first pass just reads the whole VMware disk once; the transfer only happens on the second pass.
That's why there's no XCP-ng task during the first pass (until now we've only had XCP-ng tasks, there are no XO tasks yet).
But that's changing: we're about to release XO tasks, so it will be easier to track the job.
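In the meantime, a quick way to confirm the first pass is progressing is to watch the network counters on the machine running the import, for example something like:

watch -n 2 cat /proc/net/dev

(Just a rough sketch; any interface/throughput monitoring tool you already use will do.)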
-
@olivierlambert Totally makes sense, this is great news.
Thanks again! I'll be testing the tool on some very large VMs in the coming weeks so I'll report back if anything weird happens with those.
-
@florent When I imported a Windows 10 VM from ESXi to XCP-ng, the boot firmware was incorrectly set to BIOS even though the source VM was UEFI. It would also be advantageous to have the MAC address(es) copied over by default, or at least as an option.
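For anyone hitting this before a fix lands, a manual workaround on the XCP-ng host seems to be something like the following, with the VM halted (a sketch only; the UUIDs and MAC are placeholders, and the VIF has to be recreated because its MAC can't be edited in place):

xe vm-param-set uuid=<VM_UUID> HVM-boot-params:firmware=uefi
xe vif-destroy uuid=<OLD_VIF_UUID>
xe vif-create vm-uuid=<VM_UUID> network-uuid=<NETWORK_UUID> device=0 mac=aa:bb:cc:dd:ee:ff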
Thanks,
Simon
-
@brezlord Nice catch, I will work on it today.
@planedrop: task progression is in the works with @julien-f, and I hope they'll reach maturity soon.
-
@brezlord MAC address and UEFI should work now.
-
@florent I will rebuild XO and re-import and give feedback. Thanks.
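For reference, rebuilding from the sources is roughly this (assuming xo-server runs as a systemd service; adjust to your setup):

cd xen-orchestra
git checkout master
git pull --ff-only
yarn && yarn build
sudo systemctl restart xo-server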
-
It's now available directly on master (from the sources) or on the latest XOA release channel (I updated the first post accordingly).
-
@olivierlambert Awesome, super exciting stuff!
-
@florent Everything is working now, thanks. I have successfully migrated Windows and Linux VMs.
-
Perfect! Now expect all of this in a simple wizard in the UI; that will be a great tool for everyone who wants to make the migration (and we hope to get a decent share of those users with our new support bundle).
-
After running for hours, I get the following:
✖ sesparse Vmdk reading is not functionnal yet FP-FileServer/FP-FileServer-000001-sesparse.vmdk
JsonRpcError: sesparse Vmdk reading is not functionnal yet FP-FileServer/FP-FileServer-000001-sesparse.vmdk
    at Peer._callee$ (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:139:44)
    at tryCatch (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
    at Generator.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
    at Generator.next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
    at asyncGeneratorStep (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
    at _next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
    at /usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
    at new Promise (<anonymous>)
    at Peer.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
    at Peer.exec (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:182:20)
-
@severhart The sesparse format is used for disks greater than 2TB, and for all disks after ESXi 6.5, and it's not very well documented:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-88E5A594-DEBC-4662-812F-EA421591C70F.html
We are working on implementing this reader to allow migration during Q1 2023. For now, ESXi 6.5+ VMs are limited to migration of stopped VMs without snapshots.
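If you want to check whether a VM is affected before attempting the migration, the snapshot delta disks use a -sesparse.vmdk suffix on the datastore; something like this in the ESXi shell will list them (datastore path is a placeholder):

find /vmfs/volumes/<datastore> -name '*-sesparse.vmdk'

Consolidating or removing snapshots on the VMware side gets rid of those deltas before the import.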
-
@florent Thanks for the fast reply! I will take the outage and rerun, and let you know the outcome.
-
Also, 2TiB+ disks can't be imported, since the default XCP-ng storage stack is limited to 2TiB per virtual disk.
-
@olivierlambert Would love to see some way to have it import larger-than-2TiB disks as multiple disks in XCP-ng, since most OSes just let you span drives anyway (e.g. with LVM, as sketched below).
Just realized I may not be able to leave VMware with this method, since one of the disks on a VM I'm trying to move is over 3TiB.
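On the spanning point, a Linux guest can for example stitch two smaller virtual disks back into one volume with LVM; a minimal sketch (device and volume names are assumptions):

pvcreate /dev/xvdb /dev/xvdc
vgcreate vg_data /dev/xvdb /dev/xvdc
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.ext4 /dev/vg_data/lv_data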
-
We could detect 2TiB+ drives and create a raw disk on our side, but it won't support snapshots nor live storage migration. Only another format (used in SMAPIv3) will allow us to solve this.
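For reference, a raw VDI can already be created by hand today on an LVM-based SR, with the same caveats (a sketch; the UUID and size are placeholders, and this is not what the importer does):

xe vdi-create sr-uuid=<SR_UUID> name-label=big-raw-disk virtual-size=3000GiB type=user sm-config:type=raw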