Unable to remove VM
-
I have a couple of VMs that have got stuck in a paused state, on different hosts.
Force shutdown in XO eventually times out. I've tried following the instructions here:
https://support.citrix.com/article/CTX220777
HA is not enabled on the pool.
[11:33 LP1-XS-002 ~]# xe vm-shutdown uuid=4f81b4ce-c681-dec2-e147-090036de1a47 force=true
^C
[11:34 LP1-XS-002 ~]# xe vm-reset-powerstate uuid=4f81b4ce-c681-dec2-e147-090036de1a47 force=true
This operation cannot be completed because the server is still live. host: b72027de-5c53-4ebe-a324-60c1af946d52 (LP1-XS-002)
[11:34 LP1-XS-002 ~]# list_domains | grep 4f8
73 | 4f81b4ce-c681-dec2-e147-090036de1a47 | D P H
[11:34 LP1-XS-002 ~]# xl destroy 73
libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/73/type': No such file or directory
libxl: warning: libxl_dom.c:54:libxl__domain_type: unable to get domain type for domid=73, assuming HVM
Anyone got any clues how to resolve this? I don't need these VMs (they were due to be deleted anyway), so a hard kill on them is fine.
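In case it helps, confirming the HA state and what xapi thinks of the VM from the CLI looks roughly like this (a sketch reusing the UUID from above, not a verbatim session):

# Confirm HA really is disabled on the pool (reset-powerstate is only safe with HA off)
xe pool-list params=ha-enabled

# What xapi records as the power state, and which host it thinks owns the VM
xe vm-list uuid=4f81b4ce-c681-dec2-e147-090036de1a47 params=name-label,power-state,resident-on

# Any pending or stuck tasks that could be blocking the shutdown
xe task-list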
-
I have a couple of VMs that have got stuck in a paused state, on different hosts.
-
How did you confirm that the VM is in the wrong power state? Could be VDI instead.
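From the CLI, something like this (a sketch, swap in your VM and VDI uuids) shows both the power state xapi has recorded and the VDIs sitting behind the VM:

# Power state and owning host as xapi sees them
xe vm-list uuid=<vm-uuid> params=power-state,resident-on

# The VBDs on the VM and the VDIs behind them
xe vbd-list vm-uuid=<vm-uuid> params=device,currently-attached,vdi-uuid

# Details of one of those VDIs
xe vdi-list uuid=<vdi-uuid> params=name-label,sr-uuid,virtual-size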
-
Have you checked the logs for more details?
-
Have you checked under Dashboard > Health to ensure there aren't any VDIs attached to the Control Domain?
P.S. I previously used the method shown here to clear up this issue with a VDI
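If you prefer the CLI to the Health page, the rough equivalent check looks like this (a sketch; <dom0-uuid> is whatever the first command returns for that host):

# Find the control domain's VM record on the host
xe vm-list is-control-domain=true params=uuid,name-label

# Any VBDs (and their VDIs) currently plugged into that control domain
xe vbd-list vm-uuid=<dom0-uuid> params=vdi-uuid,device,currently-attached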
-
-
@danp said in Unable remove VM:
I have a couple of VMs that have got stuck in a paused state, on different hosts.
- How did you confirm that the VM is in the wrong power state? Could be VDI instead.
By looking at the UI, seeing the power state as paused, and it failing to remove. I'm not sure how to check the state of a VDI. The VDIs are there, but the VMs won't boot or delete.
- Have you checked the logs for more details?
Yes, though I'm not entirely sure what might indicate a root cause. These entries relate to one of the VMs in question:
Jul 29 10:52:09 LP1-XS-002 xenopsd-xc: [ info||22 ||xenops_server] Caught Xenops_interface.Xenopsd_error([S(Cancelled);S(4397606)]) executing ["VM_reboot",["4f81b4ce-c681-dec2-e147-090036de1a47",[]]]: triggering cleanup actions
Jul 29 11:18:22 LP1-XS-002 xenopsd-xc: [ info||16 ||xenops_server] Caught Xenops_interface.Xenopsd_error([S(Cancelled);S(4398131)]) executing ["VM_poweroff",["4f81b4ce-c681-dec2-e147-090036de1a47",[]]]: triggering cleanup actions
Jul 29 11:53:55 LP1-XS-002 xenopsd-xc: [ info||31 ||xenops_server] Caught Xenops_interface.Xenopsd_error([S(Cancelled);S(4398834)]) executing ["VM_poweroff",["4f81b4ce-c681-dec2-e147-090036de1a47",[]]]: triggering cleanup actions
Jul 29 12:17:34 LP1-XS-002 xapi: [ warn||7833402 INET :::80|Async.VM.unpause R:c20f65c0d932|xenops] Potential problem: VM 4f81b4ce-c681-dec2-e147-090036de1a47 in power state 'paused' when expecting 'running'
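For reference, entries like these can be pulled straight from the host logs; a sketch, assuming a stock XCP-ng/XenServer layout where xapi and xenopsd log under /var/log:

# Everything mentioning the stuck VM in the main xapi/xenopsd log and the storage-manager log
grep 4f81b4ce /var/log/xensource.log /var/log/SMlog | tail -n 50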
- Have you checked under Dashboard > Health to ensure there aren't any VDIs attached to the Control Domain?
No VDIs attached to the control domain.
- P.S. I previously used the method shown here to clear up this issue with a VDI
-
The VM is paused, and only a running VM can be shut down.
You can try to unpause it, but I suppose if it's paused it's because it crashed somehow.
Alternatively, you can reset the VM's power state.
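In xe terms that would be roughly (a sketch; <vm-uuid> is your stuck VM, and only force the reset if you're happy to lose whatever is in the domain):

# Try to resume it first
xe vm-unpause uuid=<vm-uuid>

# If it stays wedged, tell xapi to mark it halted (safe only with HA disabled)
xe vm-reset-powerstate uuid=<vm-uuid> force=true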
-
I've tried that. I think this is the correct way:
[12:54 LP1-XS-002 log]# xe vm-reset-powerstate uuid=4f81b4ce-c681-dec2-e147-090036de1a47 force=true
This operation cannot be completed because the server is still live. host: b72027de-5c53-4ebe-a324-60c1af946d52 (LP1-XS-002)
-
Have you tried restarting the toolstack on the host? That could clear a bad VM state.
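That is, on the host that owns the stuck VM (as far as I know this only restarts the management daemons, not the running guests):

# Restart xapi/xenopsd and friends on the affected host
xe-toolstack-restart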
-
I have tried that on one of the hosts, but it makes no difference.
-
What about unpause?
-
Are they using network storage or local storage? Can you see their consoles in XO?
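A quick way to see that from the CLI (a sketch; shared=true generally means network-backed storage, shared=false local):

# SR types in the pool: nfs/lvmoiscsi/lvmohba are network-backed, lvm/ext are local
xe sr-list params=name-label,type,shared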
-
This looks like it was a problem with the VDIs on shared network storage.
Unpausing the VMs would "start" them, but they never actually booted and no console would appear. In the end, migrating all other VMs off the host and then rebooting it cleared whatever was causing these VMs to be stuck, and we were able to delete them.
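For anyone wanting to do the same from the CLI, the rough equivalent would be (a sketch using the host uuid from the earlier error message; the stuck VMs themselves obviously couldn't be migrated and stayed put until the reboot):

# Put the host into maintenance, move what can move, reboot, then re-enable it
xe host-disable uuid=b72027de-5c53-4ebe-a324-60c1af946d52
xe host-evacuate uuid=b72027de-5c53-4ebe-a324-60c1af946d52
xe host-reboot uuid=b72027de-5c53-4ebe-a324-60c1af946d52
# once it's back up:
xe host-enable uuid=b72027de-5c53-4ebe-a324-60c1af946d52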
We're currently trialing a new storage provider, so this is definitely something we'll be looking into more with their support.