Issue after latest host update
-
@RealTehreal I can't really help, BUT my guess is that your point 4 caused this. You must restart/reboot the host after updates, not just restart the toolstack...
-
@manilx
The documentation states that the toolstack should be restarted after updates. That's why I always did it that way: https://docs.xcp-ng.org/management/updates/#from-command-line
But anyway, the issues started after restarting the host.
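For reference, the CLI flow from that doc page is essentially this (a sketch, run on each host):

```
# Install pending updates on the host
yum update

# Restart the toolstack (xapi) afterwards; a full reboot is only
# needed when the kernel or Xen itself was updated
xe-toolstack-restart
```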
-
@RealTehreal I stand corrected. We always do the rolling pool update and let XOA take care of all of this.
-
Have you run a simple `dmesg` and checked the output?
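A quick way to filter for the usual suspects (a generic sketch, nothing specific to this issue):

```
# Show only warning-and-worse kernel messages
dmesg -l err,warn

# Or grep for common failure keywords
dmesg | grep -iE 'error|fail|fault'
```
-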
@olivierlambert I just did, but it looks fine to me (dmesg.txt).
I just tried to designate one of the slaves as the new master. Still cannot start VMs. I will now eject all slaves, reinstall XCP-ng on one of them, add it to the pool again and make it the new master. Then I'll try again. If that doesn't work either, I'll reinstall on the third device, create a new pool for it and try again.
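In case it helps anyone following along, the commands for this are roughly (UUIDs are placeholders):

```
# Promote a pool member to master
xe pool-designate-new-master host-uuid=<member-uuid>

# Eject a member from the pool (note: this wipes its local SRs)
xe pool-eject host-uuid=<member-uuid>
```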
-
Could you run a memtest on the current master?
What kind of storage are you using?
-
dmesg looks fine; probably something else is borked here.
You wrote in the Reddit thread that you were able to start VMs but they never actually started and the task was stuck at 1.000 progress. Is that still the case after electing a new master? If yes, check `xentop` on the host where the VM was started to see if it's consuming resources.
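If the default refresh rate is hard to follow, something like this helps (a plain sketch):

```
# Refresh every 2 seconds; watch the CPU(%) column for the stuck VM
xentop -d 2
```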
-
Yeah, I'm baffled, because this is not something we've seen before on a "normal" setup. I really wonder where the problem lies.
-
@olivierlambert The issue started on all three hosts after the latest update via `yum update`. I can't think of three devices having faulty memory, one right after another. Before the issue, I used an NFS share as VM storage, but I have since deployed XOA on local storage (LVM). Same issue on all three hosts.
@nikade First, I'll redeploy XOA on the pool master and take a look at `xentop`. Regarding `xsconfig`, every VM runs with one vCore at 100% all the time and doesn't respond to anything. `xe vm-list` always lists them as running, though. In this state, the only way to shut down VMs is a forced shutdown, since they won't react to the soft shutdown command.
I never had such issues either. I've been running my setup for about a year now and did several updates via the CLI. Likewise, I'm baffled that everything suddenly went down the drain.
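For completeness, this is how I'm checking state and forcing VMs off (a sketch; UUIDs are placeholders):

```
# Power state of all VMs
xe vm-list params=name-label,power-state

# Soft shutdown times out in this state, so force it
xe vm-shutdown uuid=<vm-uuid> force=true
```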
-
`xentop` shows XOA consuming 100.0 CPU (%), meaning one core. But the quick deployment is stuck at "almost there" until it times out. The VM is still consuming one CPU core while not being accessible.
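For reference, the domain state flags from the `xl` toolstack can also show whether a guest is actually running or just spinning (a generic sketch):

```
# State column: r = running, b = blocked, p = paused, c = crashed
xl list
```
-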
I can't really understand what happened, to be honest; I've done this many times without issues.
What can you see in the console tab of the VM when you start it? Or in the stats tab?
-
@RealTehreal What's the state of the network stack? Is it up, and what's the activity percentage?
-
@nikade said in Issue after latest host update:
I can't really understand what happened, to be honest; I've done this many times without issues.
What can you see in the console tab of the VM when you start it? Or in the stats tab?

I can't see anything, because XOA itself is inaccessible, since it's a VM. And VMs won't start into a usable state.
-
@RealTehreal said in Issue after latest host update:
@nikade said in Issue after latest host update:
I can't really understand what happened, to be honest; I've done this many times without issues.
What can you see in the console tab of the VM when you start it? Or in the stats tab?

I can't see anything, because XOA itself is inaccessible, since it's a VM. And VMs won't start into a usable state.

Is there anything in the XCP-ng 8.2.1 host logs for its attempts to start the VM, and in general? They may hold clues about any underlying issues.
Also, any appropriate logs from the NFS storage server would help, as they may reveal anything that could be causing issues on its end.
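A starting point on the host itself (a sketch using the standard XCP-ng log locations):

```
# Follow the toolstack log while attempting to start the VM
tail -f /var/log/xensource.log

# Storage manager (SM) errors end up here
grep -i error /var/log/SMlog | tail -n 50
```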
-
Any specific MTU settings?
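To check quickly (a generic sketch), the MTU can be read from both xapi and the kernel:

```
# MTU as configured on the xapi network objects
xe network-list params=name-label,MTU

# MTU actually set on the interfaces
ip link show | grep mtu
```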
-
A way to check if it's not network related would be using a local SR to boot a VM and see if it works.
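For example (a sketch with placeholder UUIDs), copy a test VM's disk onto a local SR and boot from there:

```
# Find a local SR (type lvm or ext)
xe sr-list params=uuid,name-label,type

# Copy the VM's disk onto it, then attach and boot from the copy
xe vdi-copy uuid=<vdi-uuid> sr-uuid=<local-sr-uuid>
```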
-
@john-c I already took a look at dmesg and /var/log/xensource.log (I crawled through >1k log lines) and couldn't find anything revealing. The NFS server is unrelated because, as stated before, I currently only use the hosts' local storage, to eliminate possible external issues.
-
@olivierlambert That's what I'm doing, to make sure it's not a network-related issue.
-
@olivierlambert I didn't change anything, at least. Just `yum update`, and it went down the drain.