VMware import stuck at "Importing..."
-
Hello
I'm currently in the process of migrating VMs from ESXi 6.7 to XCP-NG, using the built-in VMware import feature in xo-server. Some random and small Debian and Windows VMs (60-80GB) migrated smoothly and quickly, without any issues.
However, I've hit a roadblock with a particularly large VM (1.5 TB) that refuses to complete its migration. The VM I'm attempting to migrate is a vCenter Server Appliance with 16 virtual disks. The initial stages of the import go swiftly, with the smaller virtual disks completing rapidly, while the largest one takes significantly more time. After a few hours of monitoring the task list, that disk also finishes its migration.
The concern arises from the fact that the importing task remains stuck at the "started" status, with no activity apparent. Furthermore, the name of the VM continues to display as "[Importing...]-My-VCSA." Attempting to start the VM results in a message indicating that the VM is still in the process of migrating.
I am migrating the VCSA because I urgently need to reduce the number of hosts in my home lab due to rising electricity costs.
Here are some additional details:
I am connecting directly to the ESXi Host where my VCSA is located, and the VCSA is powered off.
My xo-server version is 5.122.0, and the VM has an adequate number of CPU cores and RAM.
The ESXi version is 6.7.0 Build 15160138.
Is there another log file I can consult? Below is the task log for reference: https://pastebin.com/cJinVk83
//edit
I have some additional concerns. It seems that during the migration process, something on the ESXi host crashes. After all virtual disks are copied over, the ESXi host becomes highly unresponsive. The ESXi WebUI ceases to function, and attempts to shut down the host via the console also fail, often getting stuck at "Shutting down..." It's possible that certain services on the ESXi host have become stuck, preventing the migration from completing. -
Yeah, sounds like the ESXi host is having trouble. A crash on that side would explain why it's not finishing.
What version of XO or XOA are you using?
-
@olivierlambert
XO is on version 5.122.0 -
XO or XOA? If XO from the sources, the commit should match the tip of `master` as of right now. Otherwise, upgrade. Also, check your Node version -
@olivierlambert
XO from the source. I'm at c8bfd (22.09.2023).
But apparently there are newer commits.
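For reference, this is roughly how I update my XO-from-sources install (paths and service name are just my setup; adjust to yours):

```shell
# Update an XO-from-sources install (repo path and systemd unit name
# are assumptions -- adjust to match your own installation).
cd ~/xen-orchestra
git checkout master
git pull --ff-only
yarn            # refresh dependencies
yarn build      # rebuild xo-server and xo-web
# then restart however you run it, e.g. with systemd:
sudo systemctl restart xo-server
```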
I'll upgrade. -
Okay, and what about the Node version? Even if I suspect the issue is more on the ESXi side… But I'm far from an expert there, so I can't really check what's going on…
-
Node: v18.17.1
-
That looks fine to me
-
okay.
I also think (as I write this) that something might be wrong with the ESXi setup. So, I should check the ESXi logs.
It's interesting, though, that other VMs migrated just fine. On the XO/XCP-NG side, are there any other log files I can look at besides the one for the import job?
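For the ESXi side, I'll probably start with these (standard locations on 6.x, as far as I know, over SSH):

```shell
# Common ESXi log files to watch during an export (SSH must be enabled)
tail -f /var/log/hostd.log       # host management agent -- WebUI/API activity
tail -f /var/log/vmkernel.log    # kernel and storage events
tail -f /var/log/vmkwarning.log  # warnings only, quicker to scan
tail -f /var/log/vpxa.log        # vCenter agent (if the host is managed by vCenter)
```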
-
You can check `xo-server`'s direct output. But yeah, I think ESXi is the better lead. Maybe it explodes with many disks at once, or with bigger disks. I have to say I was quite surprised to discover that ESXi isn't really resilient when exporting disks
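If you run it under systemd, something like this should show the live output (the unit name is whatever you used when setting it up, `xo-server` here is an assumption):

```shell
# Follow xo-server's stdout/stderr live (assuming a systemd unit named xo-server)
journalctl -u xo-server -f
# or, if you run it in the foreground, just watch the terminal output
```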
-
ok.
Maybe I'll find a clue in the ESXi logs. And if not, I'll take my chances with Clonezilla. -
@s-master I was exporting some VMs from ESXi via XO a few months ago, and exporting machines with one or two drives seemed to work okay. Failing that, can your ESXi system export to OVA? If not, your Clonezilla approach will probably work. Sometimes you just have to bite the bullet and start over again from scratch. It sucks, it's horrible, but it's better than a half-assed copy that flakes out on you at random times.
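For the OVA route, VMware's ovftool can pull a powered-off VM straight from a standalone ESXi host, something like (host and VM names here are placeholders):

```shell
# Export a powered-off VM from ESXi to a local OVA file with ovftool
# (hostname and VM name are placeholders; ovftool prompts for the password)
ovftool "vi://root@esxi.example.lan/My-VCSA" ./My-VCSA.ova
```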
-
Hmmmmm, I need to look back through some of my old posts and notes, but I had a similar issue with a VM with a large number of disks. The disks themselves being big wasn't the issue; just having a huge number of them eventually caused a problem. I ended up migrating by moving a single VHD and rebuilding things (that system didn't really "need" all the VHDs, so it was an easy fix for me; wouldn't be the case here, haha).
If you're trying to reduce the number of hosts, though, why would migrating the VCSA help? If you're powering off a host, wouldn't you not need the VCSA to be running anyway? Or do you have more than one ESXi instance? It might be easiest to just migrate it to another ESXi host if that's the goal, since you won't need the VCSA once you fully move to XCP-ng (assuming you are doing so).