Vm.migrate Operation blocked
-
vm.migrate
{
  "vm": "44db041e-8855-d142-e619-2e2819245a82",
  "migrationNetwork": "359e8975-0c69-e837-58ba-f6e8cd2e1140",
  "sr": "9290c72b-fdcd-cb03-9408-970ade25fcd0",
  "targetHost": "8adfc9f3-823d-4ce8-9971-237c785481a9"
}
{
  "code": 21,
  "data": {
    "objectId": "44db041e-8855-d142-e619-2e2819245a82",
    "code": "OPERATION_BLOCKED"
  },
  "message": "operation failed",
  "name": "XoError",
  "stack": "XoError: operation failed
    at operationFailed (/home/node/xen-orchestra/packages/xo-common/src/api-errors.js:21:32)
    at file:///home/node/xen-orchestra/packages/xo-server/src/api/vm.mjs:482:15
    at Object.migrate (file:///home/node/xen-orchestra/packages/xo-server/src/api/vm.mjs:469:3)
    at Api.callApiMethod (file:///home/node/xen-orchestra/packages/xo-server/src/xo-mixins/api.mjs:304:20)"
}
After updating XCP-ng with the latest patches, I can't migrate my virtual machines anymore. They are all in the same pool on the same storage. I tried in both XOA and XCP-ng Center, same result.
We are using XCP-ng 8.2.
Anyone have a clue where to look?
-
Can you post the output of the following?
xe vm-param-get uuid=xxxxxx param-name=blocked-operations
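If several VMs are affected, a quick loop over the whole pool shows which ones carry a block. A minimal sketch, assuming the standard xe CLI on the pool master (it just prints a message when run elsewhere):

```shell
# Sketch: list blocked-operations for every VM in the pool.
# Guarded so it degrades gracefully when xe is not available.
if command -v xe >/dev/null 2>&1; then
  for uuid in $(xe vm-list is-control-domain=false --minimal | tr ',' ' '); do
    blocked=$(xe vm-param-get uuid="$uuid" param-name=blocked-operations 2>/dev/null)
    [ -n "$blocked" ] && echo "$uuid: $blocked"
  done
  result=done
else
  echo "xe CLI not found - run this on an XCP-ng host"
  result=skipped
fi
```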
-
Thanks for answering!
xe vm-param-list uuid=46ca24f2-f699-6d33-657d-bbb74b495c97 | grep blocked-operations
blocked-operations (MRW): migrate_send: VM_CREATED_BY_XENDESKTOP
So I cleared the blocked operations:
xe vm-param-clear uuid=46ca24f2-f699-6d33-657d-bbb74b495c97 param-name=blocked-operations
I then rebooted the machine and checked the allowed-operations:
xe vm-param-list uuid=46ca24f2-f699-6d33-657d-bbb74b495c97 | grep allowed-operations
allowed-operations (SRO): changing_dynamic_range; migrate_send; pool_migrate; changing_VCPUs_live; suspend; hard_reboot; hard_shutdown; clean_reboot; clean_shutdown; pause; checkpoint; snapshot
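In case I need to undo this later (for example before handing the VM back to MCS), the map syntax of vm-param-set should restore the single blocked operation. A sketch, printing the command instead of running it so it can be reviewed first (UUID taken from this thread):

```shell
# Sketch: re-apply the XenDesktop migration block that vm-param-clear removed.
# Echoed rather than executed; run the printed command on an XCP-ng host.
UUID=46ca24f2-f699-6d33-657d-bbb74b495c97
CMD="xe vm-param-set uuid=$UUID blocked-operations:migrate_send=VM_CREATED_BY_XENDESKTOP"
echo "$CMD"
```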
I can now see that migrate_send is back. But when I tried to migrate again, a new error message popped up:
vm.migrate
{
  "vm": "46ca24f2-f699-6d33-657d-bbb74b495c97",
  "migrationNetwork": "359e8975-0c69-e837-58ba-f6e8cd2e1140",
  "sr": "9290c72b-fdcd-cb03-9408-970ade25fcd0",
  "targetHost": "6a310c4f-d3b6-414e-8ca8-2b05b27b45d9"
}
{
  "code": 21,
  "data": {
    "objectId": "46ca24f2-f699-6d33-657d-bbb74b495c97",
    "code": "VDI_ON_BOOT_MODE_INCOMPATIBLE_WITH_OPERATION"
  },
  "message": "operation failed",
  "name": "XoError",
  "stack": "XoError: operation failed
    at operationFailed (/home/node/xen-orchestra/packages/xo-common/src/api-errors.js:21:32)
    at file:///home/node/xen-orchestra/packages/xo-server/src/api/vm.mjs:482:15
    at Object.migrate (file:///home/node/xen-orchestra/packages/xo-server/src/api/vm.mjs:469:3)
    at Api.callApiMethod (file:///home/node/xen-orchestra/packages/xo-server/src/xo-mixins/api.mjs:304:20)"
}
Do you have any idea where I can poke around next?
Thanks
-
I don't think you should do that. If operations are blocked, there's a reason.
The reason here is probably Citrix Virtual Apps and Desktops (XenDesktop).
Also, for desktop virtualization, XenDesktop uses a specific disk mode where everything written is discarded at shutdown. So it's not possible to migrate those VMs' storage.
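You can confirm this yourself: the VDI_ON_BOOT_MODE_INCOMPATIBLE_WITH_OPERATION error above points at the disks' on-boot flag. A hedged sketch to inspect it, assuming the standard xe CLI (the VM UUID is the one from this thread; on-boot=reset means writes are thrown away at shutdown):

```shell
# Sketch: print the on-boot mode of each disk attached to the VM.
# Guarded so it degrades gracefully when xe is not available.
VM_UUID=46ca24f2-f699-6d33-657d-bbb74b495c97
if command -v xe >/dev/null 2>&1; then
  for vdi in $(xe vbd-list vm-uuid="$VM_UUID" type=Disk params=vdi-uuid --minimal | tr ',' ' '); do
    echo "$vdi: $(xe vdi-param-get uuid="$vdi" param-name=on-boot)"
  done
  result=done
else
  echo "xe CLI not found - run this on an XCP-ng host"
  result=skipped
fi
```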
You should ask Citrix if what you want to do is possible with XenDesktop. In any case, this is not related to an XCP-ng problem but the way XenDesktop is interacting with your VMs.
Also, please tell us what they think about having XenDesktop on top of XCP-ng. We'd like to have official support, but I think they won't care.
-
I have now concluded the problem is related to MCS (Machine Creation Services). MCS does not support Storage XenMotion. When you move VMs with regular XenMotion, it all works as expected. We are pretty new to MCS, which is why this problem only showed up now.
If you migrate VMs in Xen Orchestra with the default settings, without touching the migration network, it works fine because it's using XenMotion.
If you choose a migration network like in the picture below, it won't work because it's using Storage XenMotion.
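For anyone hitting the same thing, this is roughly the difference at the xe CLI level (UUIDs taken from this thread; the commands are echoed rather than executed, since vm-migrate really moves the VM):

```shell
# Placeholders from this thread - substitute your own UUIDs.
VM=46ca24f2-f699-6d33-657d-bbb74b495c97
HOST=6a310c4f-d3b6-414e-8ca8-2b05b27b45d9
SR=9290c72b-fdcd-cb03-9408-970ade25fcd0

# Plain XenMotion: memory only, disks stay on the shared SR - works with MCS VMs.
echo "xe vm-migrate uuid=$VM host-uuid=$HOST live=true"

# Storage XenMotion: each disk is mapped to a destination SR. This is what a
# custom migration network in Xen Orchestra triggers, and what MCS VMs reject.
echo "xe vm-migrate uuid=$VM host-uuid=$HOST vdi:<vdi-uuid>=$SR live=true"
```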
Even if XenDesktop is not officially supported on XCP-ng, it has been working fine for us. We have 80 terminal servers on 18 XCP-ng hosts and have been running them for years with no big problems.
Anyway thanks for your response.
-
Great to know! So it works as "expected". It would be wonderful to have your XCP-ng hosts covered by our support, so you get the entire stack supported!