Live Migrate between two stand alone pools fails
-
Also, show us the output of the following command --
xe vm-list uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578
-
@Danp
xe vm-list uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578
uuid ( RO)           : dfecebe0-6ac1-b511-85ee-b43aa0223578
     name-label ( RW): Lab - Redhat8
    power-state ( RO): running
-
@Danp Which logs in particular would you think might have some information? Nothing stands out, but the logs are pretty busy.
I've not tried a warm migration
-
@DustinB said in Live Migrate between two stand alone pools fails:
@Danp Which logs in particular would you think might have some information? Nothing stands out, but the logs are pretty busy.
I've not tried a warm migration
I also don't have a warm migration option, since these hosts are all standalone pools.
-
I would check /var/log/xensource.log first. If the error occurs immediately, then you should be able to tail the file and watch for the error to occur.
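For example, a minimal watch (a sketch; assumes the default XCP-ng log location):
# follow the xapi log and surface errors as the migration runs
tail -f /var/log/xensource.log | grep -i error
-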
@DustinB said in Live Migrate between two stand alone pools fails:
I also don't have a warm migration option
Did you look under the VM's Advanced tab in XO?
-
@Danp said in Live Migrate between two stand alone pools fails:
I would check /var/log/xensource.log first. If the error occurs immediately, then you should be able to tail the file and watch for the error to occur.
Yea, it's not instant; it just kind of times out.
All VMs are using a shared NFS storage repo as well. I might just tell the team that in order to complete my work I need a window where I can power everything down and start it on a different host.
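If it comes to that, a hedged sketch of the offline move with xe (the export path and target SR UUID below are placeholders, not values from this setup):
# on the source host: shut the VM down, then export it to a file
xe vm-shutdown uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578
xe vm-export vm=dfecebe0-6ac1-b511-85ee-b43aa0223578 filename=/mnt/backup/lab-redhat8.xva
# on the target host: import onto its SR, then start the VM
xe vm-import filename=/mnt/backup/lab-redhat8.xva sr-uuid=<target-sr-uuid>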
-
@Danp said in Live Migrate between two stand alone pools fails:
@DustinB said in Live Migrate between two stand alone pools fails:
I also don't have a warm migration option
Did you look under the VM's Advanced tab in XO?
Yeah
-
It should be there unless your XO is way out of date.
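If your XO is built from sources, updating is usually just a rebuild (a sketch; the paths and service name assume a typical from-sources install, and an XOA appliance updates from its own UI instead):
cd xen-orchestra
git pull --ff-only
yarn && yarn build
systemctl restart xo-server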
-
@Danp said in Live Migrate between two stand alone pools fails:
It should be there unless your XO is way out of date.
That is very possible; there are currently 3 different XOs running, for reasons I won't go into.
I'm trying to patch the hosts all to the same level and consolidate everything under the latest XO, and ideally create a pool out of all 3 hosts. I'm not sure that I'll be able to pool them all if there are VMs running on any given host.
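For the pooling step, a hedged sketch (run on each joining host; the master address and password are placeholders, and as far as I know joining hosts need matching versions and no running VMs):
xe pool-join master-address=<pool-master-ip> master-username=root master-password=<password>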
-
You could always try searching for the error in the logs with something like the following --
cat /var/log/xensource.log | grep "VM_BAD_POWER_STATE"
-
cat /var/log/xensource.log | grep "VM_BAD_POWER_STATE"
Jan 11 05:42:59 xen-vm xapi: [error|xen-vm|26628549 |Async.VM.migrate_send R:b60d2c889e8c|xapi] Caught Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:92efd4b1-f1c4-4760-91e9-8c7f744cdeea; halted, suspended; running ]) while destroying VM uuid=4ea00c91-c306-c551-87a3-ba0c0fd8bf61 on destination host
Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 |Async.VM.migrate_send R:cc817c26ddbd|backtrace] VM.assert_can_migrate D:f6152f70a65f failed with exception Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 |Async.VM.migrate_send R:cc817c26ddbd|backtrace] Raised Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 ||backtrace] Async.VM.migrate_send R:cc817c26ddbd failed with exception Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 ||backtrace] Raised Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
-
And searching for the UUID I'm migrating.
cat /var/log/xensource.log | grep "dfecebe0-6ac1-b511-85ee-b43aa0223578"
Jan 11 05:42:02 xen-vm xenopsd-xc: [debug|xen-vm|39 |Async.VM.migrate_send R:b60d2c889e8c|xenops] EVENT on other VM: dfecebe0-6ac1-b511-85ee-b43aa0223578
Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|26719641 |Async.VM.migrate_send R:cc817c26ddbd|audit] VM.migrate_send: VM = 'dfecebe0-6ac1-b511-85ee-b43aa0223578 (Lab - Redhat8)'
Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|26719641 |VM.assert_can_migrate D:f6152f70a65f|audit] VM.assert_can_migrate: VM = 'dfecebe0-6ac1-b511-85ee-b43aa0223578 (Lab - Redhat8)'
Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|587 |xapi events D:4a4835d5324c|xenops] Event on VM dfecebe0-6ac1-b511-85ee-b43aa0223578; resident_here = true
Jan 11 08:28:02 xen-vm xapi: [ info|xen-vm|26719650 ||export] VM.export_metadata: VM = 'dfecebe0-6ac1-b511-85ee-b43aa0223578' ('Lab - Redhat8'); with_snapshot_metadata = 'true'; include_vhd_parents = 'false'; preserve_power_state = 'true
Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|587 |xapi events D:4a4835d5324c|xenops] Event on VM dfecebe0-6ac1-b511-85ee-b43aa0223578; resident_here = true
Jan 11 08:28:03 xen-vm xapi: [debug|xen-vm|587 |xapi events D:4a4835d5324c|xenops] Event on VM dfecebe0-6ac1-b511-85ee-b43aa0223578; resident_here = true
Jan 11 09:36:07 xen-vm xapi: [debug|xen-vm|26725037 UNIX /var/lib/xcp/xapi||cli] xe vm-list uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578 username=root password=(omitted)
Jan 11 09:36:14 xen-vm xapi: [debug|xen-vm|26725079 UNIX /var/lib/xcp/xapi||cli] xe vm-list uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578 username=root password=(omitted)
-
Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
Based on the description here, it appears to be detecting a halted VM, but the VM is expected to be running.
Are you able to locate a VM with the UUID of 5008bd88-786a-4bde-a428-b137113a94d6 on either server?
P.S. You could also try this from one of your other XO VMs, or even spin up a new one, to see if that solves the problem.
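A quick way to check from each host's CLI (a sketch; note the value in the error is an OpaqueRef, which may not line up with a VM UUID, so this lookup can come back empty):
xe vm-list uuid=5008bd88-786a-4bde-a428-b137113a94d6 params=uuid,name-label,power-state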
-
@Danp said in Live Migrate between two stand alone pools fails:
Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
Based on the description here, it appears to be detecting a halted VM, but the VM is expected to be running.
Are you able to locate a VM with the UUID of 5008bd88-786a-4bde-a428-b137113a94d6 on either server?
P.S. You could also try this from one of your other XO VMs, or even spin up a new one, to see if that solves the problem.
Good find, the UUID 5008bd88-786a-4bde-a428-b137113a94d6 is somehow attached to a different VM.
But when I look at the VM itself, it has a different UUID (within XO).
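One way to cross-check from the host CLI (a sketch using the VM's name-label from earlier in the thread):
xe vm-list name-label="Lab - Redhat8" params=uuid,power-state,resident-on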
-
Okay, now I'm just confused, and this must be an XO version bug: I added the 3rd host (the one I wanted to migrate these VMs to), and it shows the VM there.
So maybe this is just a display bug.
-
It's difficult to know for sure where the problem lies since your XO isn't up to date. Wouldn't the host need to already exist in XO in order for it to be a migration target?
-
@Danp said in Live Migrate between two stand alone pools fails:
It's difficult to know for sure where the problem lies since your XO isn't up to date. Wouldn't the host need to already exist in XO in order for it to be a migration target?
Yeah, I had two hosts connected to this XO instance, just to help get insight into what is where.
I've disconnected the target host, and can see the VM on the desired host.
-