VMware migration tool: we need your feedback!
-
@olivierlambert Would love to see some way to have it import larger-than-2TiB disks as multiple disks in XCP-ng, since most OSes let you span drives anyway.
Just realized I may not be able to leave VMware with this method, since one of the disks on a VM I'm trying to move is over 3TiB.
-
We could detect 2TiB+ drives and create a raw disk on our side, but it wouldn't support snapshots or live storage migration. Only another format (used in SMAPIv3) will allow us to solve this.
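For anyone who wants to try the raw route by hand in the meantime: on XCP-ng you can already create a raw VDI past the 2TiB limit with the xe CLI, using the sm-config:type=raw trick. A minimal sketch (placeholder SR UUID, and keep in mind such a VDI loses snapshots and live storage migration):

    # create a 3TiB raw VDI on a given SR (replace the placeholder UUID)
    xe vdi-create sr-uuid=<sr-uuid> name-label="big-raw-disk" type=user virtual-size=3TiB sm-config:type=raw

You'd then attach it to the migrated VM with xe vbd-create as usual.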
-
@olivierlambert Gotcha, this makes sense.
Is there any way to skip a specific drive with this migration script? I'm thinking I could skip the larger-than-2TiB disk, create 2 x 2TiB disks after migration, span them in Windows, and then copy the data over manually from the VMware VM.
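For reference, the Windows half of that plan is a spanned dynamic volume. A rough diskpart session, assuming the two new 2TiB disks show up as disks 1 and 2 (your numbers will differ, and spanning requires converting them to dynamic disks first):

    diskpart
    list disk                        # check which numbers the new disks got
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    create volume spanned disk=1,2   # one volume across both disks
    format fs=ntfs quick
    assign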
-
I think we could probably skip it (since it's likely not a system disk), so you can then manually copy the rest however you prefer. We should probably add an option like "just skip 2TiB+ disks without failing".
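On the CLI side I'd picture that as one more boolean on the import call. To be clear, this flag is purely hypothetical and not implemented today:

    # "skipDisksOver2TiB" is a hypothetical flag, shown only to illustrate the idea
    xo-cli vm.importFromEsxi host=<esxi-ip> user=root password='<password>' sslVerify=false vm=<vmid> network=<network-uuid> sr=<sr-uuid> thin=true skipDisksOver2TiB=true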
-
@olivierlambert Yes, I think that would be great; it would be a good workaround for people who have larger-than-2TiB disks.
-
@olivierlambert Also, do you know what happens if a disk is over 2TiB thick provisioned but actual data usage on it is only around 1TiB? Will the script still fail, or will it just create a 1TiB disk?
-
The problem with doing that automatically is that you can get bad surprises. We'll probably just skip it after adding the option.
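If the check goes by the provisioned (logical) size rather than the blocks actually in use, which is where those surprises would come from, you can compare the two for a given disk from an SSH shell on the ESXi host (paths are placeholders):

    # logical (provisioned) size of the flat extent
    ls -lh /vmfs/volumes/<datastore>/<vm>/<disk>-flat.vmdk
    # space actually allocated on the datastore
    du -h /vmfs/volumes/<datastore>/<vm>/<disk>-flat.vmdk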
-
@florent none of the drives are over 2TB; the largest is 900GB, so I assume it's due to snapshots?
Drives are:
127GB
900GB
325GB
250GB
500GB
-
@severhart as Flo said, it's because your VMware version (6.5+) uses another "diff" format that isn't supported yet. In your case, you should do a cold migration for now, until we support this diff format.
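A cold migration here just means halting the VM on the ESXi side before running the import. A sketch reusing the flags already shown in this thread (placeholder IDs and UUIDs):

    # on the ESXi host over SSH: find the VM id, then power it off
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/power.off <vmid>
    # then run the import from the XO side as before
    xo-cli vm.importFromEsxi host=<esxi-ip> user=root password='<password>' sslVerify=false vm=<vmid> network=<network-uuid> sr=<sr-uuid> thin=true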
-
Total noob here jumping in at the deep end.
I get
xo-cli vm.importFromEsxi host=xxx.xxx.xxx.xxx user=w...w password='u .... l' sslVerify=false vm=16 network=a1044bf9-4c06-8ae0-060c-e3462dd4524f sr=9b465ed4-e6d2-7a67-b5e0-5edc4915adac stopSource=true thin=true
✖ Cannot read properties of undefined (reading 'stream')
JsonRpcError: Cannot read properties of undefined (reading 'stream')
    at Peer._callee$ (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/json-rpc-peer/dist/index.js:139:44)
    at tryCatch (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
    at Generator.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
    at Generator.next (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
    at asyncGeneratorStep (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
    at _next (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
    at /opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
    at new Promise (<anonymous>)
    at Peer.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
    at Peer.exec (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/json-rpc-peer/dist/index.js:182:20)
-
Hey @magicker can you give us more details? Are you using XOA or XO from the sources? At which version? Also on VMware side, what's your ESXi version?
-
@olivierlambert said in VMware migration tool: we need your feedback!:
XO from the sources
XO from GitHub (2 days old)
ESXi 7.0.0 (Build 16324942)
-
The diff for warm migration isn't supported on ESXi 7.0 (yet!), so your VM must be halted first (on the VMware side, I mean).
-
@olivierlambert ah I see! Other than that, it works... just like magic! Very cool.
-
Executing the following command resulted in an error until I powered the VM off:
ESXi host ==> VMware ESXi, 6.5.0, 19092475
Any ideas?
[11:02 10] xoa@xoa:~$ xo-cli vm.importFromEsxi host=xxx.xxx.xxx.xxx user=root password=secret sslVerify=false vm=262 network=13d8ab8a-dfdc-1e5c-0e35-0028af26987a sr=e748751e-02fd-28ae-5fa9-d58f5f0dc50a stopSource=true thin=true
✖ Cannot read properties of undefined (reading 'stream')
JsonRpcError: Cannot read properties of undefined (reading 'stream')
    at Peer._callee$ (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:139:44)
    at tryCatch (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
    at Generator.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
    at Generator.next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
    at asyncGeneratorStep (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
    at _next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
    at /usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
    at new Promise (<anonymous>)
    at Peer.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
    at Peer.exec (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:182:20)
[11:02 10] xoa@xoa:~$
-
Same answer as in my previous post: since ESXi 6.5, there's a new diff algorithm. @florent is working on it, but it's even more complicated than the "legacy" one.
-
@olivierlambert oh ok, I saw that and thought your comment was specific to the ESXi 7 version.
Thanks for the speedy response. I'd love to know how you clone yourself to be so responsive 24/7. Or am I talking to a Vates instance of ChatGPT?
-
Sometimes I wonder! But mostly it's because the community is always my priority, regardless of the fact that we are a company with more than 30 people nowadays (with everything that entails).
-
New issue: I am trying to pull a VM (powered off) from a server I have already pulled 3 VMs from with no problems.
However, this time the command ends after only a few seconds.
I can see the VM, but no disk is pulled over at all.
No obvious errors in the logs.
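For context, here's roughly how I checked the logs, assuming xo-server runs under systemd on this box (adjust for your setup):

    # last chunk of xo-server logs, filtered for the import
    journalctl -u xo-server -n 500 --no-pager | grep -iE 'esxi|import|error'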