@Pilow Hi, yes, thank you for that. The hosts appear to have updated, but they can't reboot because of some VM (the log doesn't show the friendly/display VM name, it seems). See below:
{
  "id": "0mi3j7qhj",
  "properties": {
    "poolId": "06f0d0d0-5745-9750-12b5-f5698a0dfba2",
    "poolName": "XCP-Lab",
    "progress": 0,
    "name": "Rolling pool update",
    "userId": "dd12cef8-919e-4ab7-97ae-75253331c84f"
  },
  "start": 1763407364311,
  "status": "failure",
  "updatedAt": 1763407364581,
  "tasks": [
    {
      "id": "9vj4btqb1qk",
      "properties": {
        "name": "Listing missing patches",
        "total": 2,
        "progress": 100
      },
      "start": 1763407364314,
      "status": "success",
      "tasks": [
        {
          "id": "yglme9v6vz",
          "properties": {
            "name": "Listing missing patches for host 42f6368c-9dd9-4ea3-ac01-188a6476280d",
            "hostId": "42f6368c-9dd9-4ea3-ac01-188a6476280d",
            "hostName": "nkc-xcpng-2.nkcschools.org"
          },
          "start": 1763407364316,
          "status": "success",
          "end": 1763407364318
        },
        {
          "id": "tou8ffgte7",
          "properties": {
            "name": "Listing missing patches for host 1f991575-e08d-4c3d-a651-07e4ccad6769",
            "hostId": "1f991575-e08d-4c3d-a651-07e4ccad6769",
            "hostName": "nkc-xcpng-1.nkcschools.org"
          },
          "start": 1763407364317,
          "status": "success",
          "end": 1763407364318
        }
      ],
      "end": 1763407364319
    },
    {
      "id": "0i923wgi5wlg",
      "properties": {
        "name": "Updating and rebooting"
      },
      "start": 1763407364319,
      "status": "failure",
      "end": 1763407364578,
      "result": {
        "code": "CANNOT_EVACUATE_HOST",
        "params": [
          "VM_LACKS_FEATURE,OpaqueRef:492194ea-9ad0-b759-ab20-8f72ffbb0cbb"
        ],
        "call": {
          "duration": 248,
          "method": "host.assert_can_evacuate",
          "params": [
            " session id ",
            "OpaqueRef:0619ffdc-782a-b854-c350-5ce1cc354547"
          ]
        },
        "message": "CANNOT_EVACUATE_HOST(VM_LACKS_FEATURE,OpaqueRef:492194ea-9ad0-b759-ab20-8f72ffbb0cbb)",
        "name": "XapiError",
        "stack": "XapiError: CANNOT_EVACUATE_HOST(VM_LACKS_FEATURE,OpaqueRef:492194ea-9ad0-b759-ab20-8f72ffbb0cbb)\n at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202511170838/packages/xen-api/_XapiError.mjs:16:12)\n at file:///opt/xo/xo-builds/xen-orchestra-202511170838/packages/xen-api/transports/json-rpc.mjs:38:21\n at runNextTicks (node:internal/process/task_queues:65:5)\n at processImmediate (node:internal/timers:453:9)\n at process.callbackTrampoline (node:internal/async_hooks:130:17)"
      }
    }
  ],
  "end": 1763407364581,
  "result": {
    "code": "CANNOT_EVACUATE_HOST",
    "params": [
      "VM_LACKS_FEATURE,OpaqueRef:492194ea-9ad0-b759-ab20-8f72ffbb0cbb"
    ],
    "call": {
      "duration": 248,
      "method": "host.assert_can_evacuate",
      "params": [
        " session id ",
        "OpaqueRef:0619ffdc-782a-b854-c350-5ce1cc354547"
      ]
    },
    "message": "CANNOT_EVACUATE_HOST(VM_LACKS_FEATURE,OpaqueRef:492194ea-9ad0-b759-ab20-8f72ffbb0cbb)",
    "name": "XapiError",
    "stack": "XapiError: CANNOT_EVACUATE_HOST(VM_LACKS_FEATURE,OpaqueRef:492194ea-9ad0-b759-ab20-8f72ffbb0cbb)\n at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202511170838/packages/xen-api/_XapiError.mjs:16:12)\n at file:///opt/xo/xo-builds/xen-orchestra-202511170838/packages/xen-api/transports/json-rpc.mjs:38:21\n at runNextTicks (node:internal/process/task_queues:65:5)\n at processImmediate (node:internal/timers:453:9)\n at process.callbackTrampoline (node:internal/async_hooks:130:17)"
  }
}
I checked all my VMs and they are all on shared storage, so I'm not sure why one of them would be unable to live migrate.
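To try to narrow it down, this is a minimal sketch (using the Python XenAPI bindings; the host URL and credentials below are placeholders, and I'm assuming nkc-xcpng-1 is the pool master) of how I plan to map that OpaqueRef back to a VM name and see which operations XAPI currently allows on it:

#!/usr/bin/env python3
# Sketch: resolve the OpaqueRef from the CANNOT_EVACUATE_HOST error to a VM
# name/UUID and list which operations XAPI currently allows on it.
import XenAPI

HOST_URL = "https://nkc-xcpng-1.nkcschools.org"  # assumed pool master (placeholder)
USER = "root"
PASSWORD = "change-me"                           # placeholder
VM_REF = "OpaqueRef:492194ea-9ad0-b759-ab20-8f72ffbb0cbb"  # ref from the error above

# ignore_ssl is needed for the host's self-signed certificate (recent XenAPI.py bindings)
session = XenAPI.Session(HOST_URL, ignore_ssl=True)
try:
    session.xenapi.login_with_password(USER, PASSWORD)
    name = session.xenapi.VM.get_name_label(VM_REF)         # friendly/display name
    uuid = session.xenapi.VM.get_uuid(VM_REF)
    ops = session.xenapi.VM.get_allowed_operations(VM_REF)  # e.g. 'pool_migrate'
    print("VM:", name, uuid)
    print("Allowed operations:", ops)
finally:
    session.xenapi.session.logout()

If the migrate operations don't show up in allowed_operations, at least I'll know which VM to look at more closely.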
Thanks.