Best Way to Live Migrate XO VM
-
Hi,
If the migration doesn't work, it means something is wrong inside your VM (it isn't cooperating). Check your memory settings to be sure dynamic min = dynamic max = static max.
Then, check from the OS perspective that there's enough free RAM and that the OS isn't frozen or in some other state that could cause the migration to fail.
Live migration should work 100% of the time when the guest cooperates, so there's clearly a problem.
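The memory check above can be sketched from a host's CLI. This is a hedged example: the VM name and the 4 GiB value are placeholders, not values from this thread.

```shell
# Run on an XCP-ng host; "my-vm" is a placeholder VM name.
uuid=$(xe vm-list name-label=my-vm --minimal)

# Inspect the current limits (dynamic min/max and static max should match):
xe vm-param-get uuid="$uuid" param-name=memory-static-max
xe vm-param-get uuid="$uuid" param-name=memory-dynamic-max
xe vm-param-get uuid="$uuid" param-name=memory-dynamic-min

# Align them in one shot (example values; VM must usually be halted):
xe vm-memory-limits-set uuid="$uuid" static-min=1GiB \
  dynamic-min=4GiB dynamic-max=4GiB static-max=4GiB
```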
-
@olivierlambert I have migrated other VMs just fine; never had a problem. It kinda makes sense that migrating the XO VM hiccups a little bit. My vCenter VM does the same for 30-40 secs when I vMotion it in vCenter, but it fully recovers. My XO VM hasn't recovered in XO to this point. And to be clear... yes, my XO VM is on my XCP hosts, not in VMW.
How do I do what you ask? I tried to SSH into the XO VM (and this is on sources in my lab, btw) and it won't connect. I don't remember how much RAM I gave it; I want to say 2 vCPU and 4GB RAM, maybe 8GB. I can log into XO Lite, but there is no menu to view my XO VM resources. Ideas/thoughts?
Thanks Olivier -
@olivierlambert If I SSH onto the XCP host it was originally on (XCP Host1) and run xe vm-list, it shows there as running; if I SSH onto XCP Host2, xe vm-list shows it there as running too.
Also... the XO VM was fully updated; no outstanding commits. I just updated it earlier this morning. -
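One way to resolve the "both hosts show it" confusion (VM name taken from the error message later in this thread; treat this as a sketch):

```shell
# Find the VM and ask the pool which host it is actually resident on:
uuid=$(xe vm-list name-label=nkc-xo --minimal)
host_uuid=$(xe vm-param-get uuid="$uuid" param-name=resident-on)
xe host-param-get uuid="$host_uuid" param-name=name-label
```

Note that `xe vm-list` queries the shared pool database, so every pool member returns the same list; the `resident-on` field is what tells you where the VM is actually running.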
@olivierlambert And, below is the msg I get when I attempt to force shut the XO VM down directly on each XCP host:
xe vm-shutdown name-label=nkc-xo force=true
The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
message: Object with type VM and id 95ad90cc-85b0-98c6-d81d-61ba82742947/config does not exist in xenopsd -
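For a "does not exist in xenopsd" error like the one above, a hedged suggestion: before rebooting the whole host, it is often enough to restart the toolstack on the affected host, which restarts XAPI/xenopsd without touching running guests.

```shell
# On the affected XCP-ng host; restarts the management toolstack only,
# it does not reboot the host or stop running VMs:
xe-toolstack-restart
```

As a last resort, `xe vm-reset-powerstate uuid=<uuid> force=true` can force the pool's view of the VM's power state, but it should only be used when you are certain the VM is not actually running anywhere, otherwise you risk disk corruption.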
I got the XO VM back up by rebooting my Pool Master XCP Host which forced the VM to shutdown. Not thrilled this happened. Migrating the XO VM in XO needs work. So, for the process of rebooting XCP Hosts after applying patches, either via RUP or manually, use the following process (Vates Team correct as needed):
1. Apply patches
2. Migrate all VMs off the Master XCP Host. If the XO VM is on this Host, do not migrate it
3. Log out of XO
4. SSH into the XO VM and shut it down
5. Log into the Master XCP Host's XO-Lite
6. SSH into the Master XCP Host and reboot it
7. Once the Master XCP Host is back online, power the XO VM back on via XO-Lite
8. Migrate VMs back to the Master XCP Host as needed
9. If the Master role changed to another Host, from the Pool > Advanced tab, change the Master back to the desired XCP Host
10. Reboot other XCP Hosts in the Pool to finalize applying patches. Before doing so, make sure to migrate VMs off -
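Steps 2, 4, and 6 of the process above can be sketched from the CLI. Host and VM names here are placeholders; treat this as an outline under those assumptions, not a tested script.

```shell
# Placeholder names; run the xe commands on any pool member.
master_uuid=$(xe host-list name-label=xcp-host1 --minimal)

# Step 2: disable the master for new VMs, then evacuate migratable VMs.
xe host-disable uuid="$master_uuid"
xe host-evacuate uuid="$master_uuid"

# Step 4: shut the XO VM down cleanly from inside the guest.
ssh admin@nkc-xo "sudo shutdown -h now"

# Step 6: reboot the master.
ssh root@xcp-host1 "reboot"

# After the reboot, re-enable the host so it accepts VMs again.
xe host-enable uuid="$master_uuid"
```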
Now that my XO VM is back up... I tried to live migrate some of my VMs in my Pool and the migrations are failing (Linux & Windows), and I can no longer see the VMs' consoles. They're basically "hung", it seems, and I can't power them down. The same behavior that hit my XO VM has now happened to them. Live migrations worked fine before the issue I had with my XO VM.
-
Sorry for all the comments/posts... but I think I may have figured out why the failures occurred. I can live migrate VMs again after shutting down my XO VM and rebooting both of my XCP hosts. Of course, I've not re-attempted live migrating my XO VM. Too risky! Again, that should be fixed.
One of the requirements of live migration is that guest tools need to be installed, correct? It appears that if an attempt is made to live migrate a VM which doesn't have tools installed, it "hangs". I have several VMs like that; they're software appliances, and currently I'm prevented from installing tools on them. After the tools-less VM hangs attempting a migration, if you then attempt to live migrate a VM which does have tools installed, it will also hang... my guess is because of the failure of the first VM, the one without tools. Not sure why. If that is the case, that needs to be fixed as well. If this is not accurate, I would like to know why this migration hang/failure behavior occurs. I could re-attempt live migrating a VM which doesn't have tools installed... but I don't want to go through the hassle of powering everything down and rebooting my hosts to get it working again.
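A hedged way to check, before migrating, whether a guest's PV drivers were detected (the VM name is a placeholder; the `PV-drivers-detected` field comes from the guest metrics and may not be exposed on every XCP-ng/XenServer version, in which case `PV-drivers-version` is an alternative):

```shell
# List each VM with whether PV drivers were detected by the host:
xe vm-list params=name-label,PV-drivers-detected

# Or check a single VM:
uuid=$(xe vm-list name-label=my-appliance --minimal)
xe vm-param-get uuid="$uuid" param-name=PV-drivers-detected
```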
-
As with many (all?) virtualization platforms, you need the tools installed in your VMs to get optimal usage out of them.
-
@olivierlambert Yep..agreed. I may test this behavior out again to be sure what I think may have happened...was indeed my issue with regards to the non-tools VMs vs Tools VMs.
For the XO VM though...it would be highly beneficial to have the ability to live migrate that to other XCP Hosts in a pool for obvious reasons (update/maintenance). Can that be looked into, please?
Thanks! -
I'm not aware of any issue live migrating the XOA VM.
We do it very often, as do many, many customers and users all around the world. If you are using XO from the sources, I would suggest trying with XOA instead (even the Free version is enough).
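If you want to try XOA quickly, Vates documents a one-line deploy script run from a host's shell (as always with commands piped from the network, check the current official docs before running it):

```shell
# Run on an XCP-ng host; fetches and runs the official XOA deploy script.
bash -c "$(wget -qO- https://xoa.io/deploy)"
```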
-
@olivierlambert Hmm... ok. Yes, I'm using the "sources" version. Why does it matter which? Both are a Linux OS with the XO app on it, correct? It's just that one is manually created and the other is appliance-based; but you can still SSH into each the same way, correct? Of course, I may be way off there.
-
And again... I expect a momentary "blip" during migration for obvious reasons. But it should still be able to recover and eventually succeed; and realistically, not after several minutes (10+ min), but like other VMs, within a minute or two. At some point I may download a trial of XOA and attempt the migration. I just don't have any other physical hosts to test with. I guess I can temporarily remove my current ones from my current Pool and XO, then add them briefly to the XOA... we'll see.
-
@olivierlambert Hey Olivier... I came across another issue today on this XO VM. Not sure if you recall, but a week or so ago I created a post here on how best to import an XO VM from VMware to XCP. I ended up finding a clunky way to do so using the OVA import process. So this is the issue I encountered: when I imported this VM, the import process changes a couple of things in the guest. The vNIC name in the Linux OS gets changed (from ens# to enX#), and for the VM to see the network, the netplan config file needs the NIC name/label changed in it.
Another issue occurs, and this is the main thing I wanted to share here: the disk device name in the Linux OS changes as well, from /dev/sd# (sda1, sdb1... whichever) to /dev/xvda1. Why is this a problem? Well, I was going to do an XO update this morning. Before doing so, I thought it would be good to update my Ubuntu 24.04 OS. I ran sudo apt update && sudo apt upgrade -y and it failed. Why? An EFI package can't find the boot partition, it seems. I get the following error:
mount: /var/lib/grub/esp: special device /dev/sda1 does not exist.
followed by dpkg: error processing package grub-efi-amd64-signed (--configure) and dpkg: error processing package shim-signed (--configure). I believe the root of those two package update issues is the disk device label/name changing. Anyone who imports Linux VMs and performs an update/upgrade will run into this error and will not be able to update their VM. The server is otherwise fine: you can log into it, apps still run, etc., but you can't install or update any package. Are you all aware of this?
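The netplan rename described above can be sketched as follows. This is a minimal demo on a temp file so nothing real is touched; the interface names (ens160 to enX0) are illustrative assumptions, and on a real system you would edit /etc/netplan/*.yaml and run netplan apply.

```shell
# Write a sample netplan config to a temp file, then rename the NIC
# the way the OVA import forces you to (old VMware-style name -> enX0).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
network:
  version: 2
  ethernets:
    ens160:
      dhcp4: true
EOF
sed -i 's/ens160/enX0/' "$cfg"
cat "$cfg"
```

The same idea applies to the disk rename: if /etc/fstab or the grub ESP mount still references /dev/sda1, pointing those entries at the new /dev/xvda1 name (or better, at UUID=... identifiers, which survive such renames) should let grub-efi-amd64-signed and shim-signed configure again.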
Thanks. -
@coolsport00 said in Best Way to Live Migrate XO VM:
I guess I can remove my current ones temporarily from my current Pool and XO then add them briefly to the XOA...we'll see.
You can run several XO/XOA instances, connected to the same pool, at the same time. It can be quite handy sometimes.
The easiest way to deploy a XOA is from XO Lite.
-
@ph7 Thanks for the tip! Appreciate it.