Could the "source" pool (Citrix Hypervisor) be the origin of this error?
-
RE: Can not start VM migration to other pool from XOA
-
RE: Can not start VM migration to other pool from XOA
Before this, I had migrated 100+ systems this way, with the same options. Now it suddenly quits with this error.
-
RE: Can not start VM migration to other pool from XOA
No snaps available, problem persists
vm.migrate
{
  "mapVdisSrs": {
    "5ebca28b-e870-4475-87b3-19a3ba6cff81": "70895525-ee2e-659b-970f-d43ea3a5991a"
  },
  "mapVifsNetworks": {
    "0648c623-a09c-ee65-4d43-c008315c2bbc": "21166b7f-048e-be22-8d8d-38e6695ad8c6",
    "28a137b5-3b56-6549-dd20-b51c9ce9aced": "863a34d3-e7f4-585f-ec8f-9fcbad68d8ca",
    "af156628-b126-6845-009f-9ff8ae325ec0": "21166b7f-048e-be22-8d8d-38e6695ad8c6"
  },
  "migrationNetwork": "0fc3ada6-5df3-7f95-bae2-6e7e7de34a21",
  "sr": "70895525-ee2e-659b-970f-d43ea3a5991a",
  "targetHost": "9fdcfa08-3155-4f21-9db1-5854d5517f14",
  "vm": "eeac308a-0fa5-9e24-d7af-b3eb44cbc616"
}
{
  "code": "VDI_NOT_IN_MAP",
  "params": [
    "OpaqueRef:3244cfb3-e6cf-4587-916f-21f07c65ddf6"
  ],
  "task": {
    "uuid": "e660f16a-70bd-a387-30ca-9437bb9c2f91",
    "name_label": "Async.VM.assert_can_migrate",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20241030T15:51:22Z",
    "finished": "20241030T15:51:22Z",
    "status": "failure",
    "resident_on": "OpaqueRef:685b98b6-758d-4171-a75b-5bebe43be748",
    "progress": 1,
    "type": "<none/>",
    "result": "",
    "error_info": [
      "VDI_NOT_IN_MAP",
      "OpaqueRef:3244cfb3-e6cf-4587-916f-21f07c65ddf6"
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
  },
  "message": "VDI_NOT_IN_MAP(OpaqueRef:3244cfb3-e6cf-4587-916f-21f07c65ddf6)",
  "name": "XapiError",
  "stack": "XapiError: VDI_NOT_IN_MAP(OpaqueRef:3244cfb3-e6cf-4587-916f-21f07c65ddf6)
    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)
    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1041:24)
    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1075:14
    at Array.forEach (<anonymous>)
    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1065:12)
    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1238:14)"
}
- XO Server restarted from XOA: still a problem
- XOA server restarted (reboot): still a problem
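If I understand VDI_NOT_IN_MAP correctly, one of the VM's VDIs has no entry in mapVdisSrs. A sketch of the check I plan to run on the source pool master (the VM UUID is the one from the call above; field names as I know them from the xe CLI):
# List every VBD/VDI attached to the VM; each listed VDI should have an
# entry in mapVdisSrs, otherwise assert_can_migrate is expected to fail.
xe vbd-list vm-uuid=eeac308a-0fa5-9e24-d7af-b3eb44cbc616 params=device,type,empty,vdi-uuid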
-
RE: Can not start VM migration to other pool from XOA
Yes, all VMs have about 5 snapshots.
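Could the snapshot disks be the unmapped VDIs? A sketch of how I would list them (my assumption is that snapshot VDIs may also need to be covered by the map; placeholder UUIDs in angle brackets):
# List the VM's snapshots, then the disks each snapshot references.
xe snapshot-list snapshot-of=<vm-uuid> params=uuid,name-label
xe vbd-list vm-uuid=<snapshot-uuid> params=vdi-uuid,type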
-
RE: Can not start VM migration to other pool from XOA
No ISO in the CD drive.
I tried this with different VMs, and all fail with the same error: "VDI_NOT_IN_MAP".
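For completeness, the kind of check I used to confirm the CD drives are really empty (a sketch; placeholder UUID):
# Show CD-type VBDs and whether they still reference a VDI (an attached ISO).
xe vbd-list vm-uuid=<vm-uuid> type=CD params=empty,vdi-uuid,vdi-name-label
# If one of them were not empty, I would eject it before retrying:
xe vm-cd-eject vm=<vm-uuid>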
-
Can not start VM migration to other pool from XOA
When I start a migration of a VM from one pool to the other, I get the following error:
id "0m2vs9n41" properties method "vm.migrate" params mapVdisSrs c943a1e8-4f48-4127-8ab3-53c90e999d42 "66fc4777-39d5-48de-79fd-4ef489d1267e" mapVifsNetworks 023926a5-88e1-1ae8-0dec-53be015876ec "863a34d3-e7f4-585f-ec8f-9fcbad68d8ca" 9d446505-bc89-cb64-a443-d2e674cea6bf "2f1dbbb9-7a3a-e327-c332-6a69ea962392" migrationNetwork "0fc3ada6-5df3-7f95-bae2-6e7e7de34a21" sr "66fc4777-39d5-48de-79fd-4ef489d1267e" targetHost "9fdcfa08-3155-4f21-9db1-5854d5517f14" vm "42caf63a-d9cb-1b55-c214-ad0aee29b527" name "API call: vm.migrate" userId "9b0fe680-c0f0-444d-9392-5b78154796e0" type "api.call" start 1730287105345 status "failure" updatedAt 1730287105497 end 1730287105497 result code "VDI_NOT_IN_MAP" params 0 "OpaqueRef:2806f8f1-9bd0-49da-8355-d17e3d2a4eba" task uuid "3c4d30d9-8ca4-4363-ee2e-a60900185d97" name_label "Async.VM.assert_can_migrate" name_description "" allowed_operations [] current_operations {} created "20241030T11:18:25Z" finished "20241030T11:18:25Z" status "failure" resident_on "OpaqueRef:685b98b6-758d-4171-a75b-5bebe43be748" progress 1 type "<none/>" result "" error_info 0 "VDI_NOT_IN_MAP" 1 "OpaqueRef:2806f8f1-9bd0-49da-8355-d17e3d2a4eba" other_config {} subtask_of "OpaqueRef:NULL" subtasks [] backtrace "(((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))" message "VDI_NOT_IN_MAP(OpaqueRef:2806f8f1-9bd0-49da-8355-d17e3d2a4eba)" name "XapiError" stack "XapiError: VDI_NOT_IN_MAP(OpaqueRef:2806f8f1-9bd0-49da-8355-d17e3d2a4eba)\n at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1041:24)\n at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1075:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1065:12)\n at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1238:14)"
Migrating from XCP-ng is not a problem.
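For reference, a quick sanity check I intend to run on the destination pool master (a sketch only; the UUIDs are taken from the call above), to confirm the objects referenced in mapVdisSrs/mapVifsNetworks actually exist there:
# Destination SR, one of the target networks, and the target host from the call.
xe sr-list uuid=66fc4777-39d5-48de-79fd-4ef489d1267e params=name-label,shared
xe network-list uuid=863a34d3-e7f4-585f-ec8f-9fcbad68d8ca params=name-label,bridge
xe host-list uuid=9fdcfa08-3155-4f21-9db1-5854d5517f14 params=name-label,enabled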
-
RE: Advanced telemetry enabled, fix used, but not working
Thanks, this works!
-
Advanced telemetry enabled, fix used, but not working
Hello,
I have enabled "Advanced telemetry" for a host, but afterwards, visiting the page gives:
Access to file is not permitted: /usr/share/netdata/web//index.html
I used the fix on the host:
sudo su -
chown root:netdata -R /usr/share/netdata
systemctl restart netdata
But I still got the error.
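What I plan to verify next (a sketch; the config path and option names are assumptions based on a stock netdata install):
# Actual ownership of the web files netdata complains about
ls -ld /usr/share/netdata/web /usr/share/netdata/web/index.html
# Which owner/group netdata expects for its web files
# (the [web] section of netdata.conf has "web files owner" / "web files group")
grep -A 10 '^\[web\]' /etc/netdata/netdata.conf
# Which user the netdata daemon actually runs as
ps -o user=,group=,cmd= -C netdata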
-
RE: Moving vm in pool fails with "HOST_NOT_ENOUGH_FREE_MEMORY" (there is enough memory)!
Since it was somewhat critical (an urgent migration from another pool), I had to switch the pool master and restart xcp028.
But I downloaded the system log. Maybe you can find the cause of the problem in there?
BTW, before restarting the host, I was not able to download these logs (the download failed).
-
RE: Moving vm in pool fails with "HOST_NOT_ENOUGH_FREE_MEMORY" (there is enough memory)!
Hi @olivierlambert,
I am just trying to do a live migration with shared storage (NFS), memory only, without a storage migration.
Now, I'm freeing up xcp028 (which, btw, is pool master).
- Should a toolstack restart solve the issue?
- Or should I reboot the system?
- Before an eventual reboot, should I change the pool master to the other system? (The commands I have in mind are sketched below.)
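Sketch of the commands I have in mind (run in dom0 on xcp028 / the pool master; placeholder UUID for the other host):
# Restart only the XAPI toolstack on xcp028; running VMs should keep running.
xe-toolstack-restart
# If the pool master role should move to the other host first:
xe pool-designate-new-master host-uuid=<uuid-of-the-other-host>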
-
RE: Moving vm in pool fails with "HOST_NOT_ENOUGH_FREE_MEMORY" (there is enough memory)!
Good afternoon @olivierlambert ,
NO, HA is not enabled.
-
Moving vm in pool fails with "HOST_NOT_ENOUGH_FREE_MEMORY" (there is enough memory)!
I have an XCP-ng pool with 2 hosts, each having 384GB memory.
Moving a VM (16 GB) from one host to the other host, which has enough free memory (254 GB), results in the error "HOST_NOT_ENOUGH_FREE_MEMORY"!
The log states:
vm.migrate
{
  "targetHost": "8b56a961-3eaa-45f7-833c-bea08565e7a9",
  "vm": "10afa9ed-44bf-733e-7f89-d26e1263f27b"
}
{
  "code": "HOST_NOT_ENOUGH_FREE_MEMORY",
  "params": [
    "4334813184",
    "1790664704"
  ],
  "task": {
    "uuid": "a17234bc-2d21-57b7-c8d2-c5ba7050b9e2",
    "name_label": "Async.VM.pool_migrate",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20241027T10:01:55Z",
    "finished": "20241027T10:01:55Z",
    "status": "failure",
    "resident_on": "OpaqueRef:0ea9e83b-d9d2-4e80-9176-634aea4c77cb",
    "progress": 1,
    "type": "<none/>",
    "result": "",
    "error_info": [
      "HOST_NOT_ENOUGH_FREE_MEMORY",
      "4334813184",
      "1790664704"
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_helpers.ml)(line 618))((process xapi)(filename ocaml/xapi/xapi_vm_helpers.ml)(line 718))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1226))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1356))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1225))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
  },
  "message": "HOST_NOT_ENOUGH_FREE_MEMORY(4334813184, 1790664704)",
  "name": "XapiError",
  "stack": "XapiError: HOST_NOT_ENOUGH_FREE_MEMORY(4334813184, 1790664704)
    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)
    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1041:24)
    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1075:14
    at Array.forEach (<anonymous>)
    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1065:12)
    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1238:14)"
}
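If I read the two error parameters as bytes (my assumption: memory needed vs. memory available), XAPI believes the target host has far less free memory than XO shows: 4334813184 / 1024^3 ≈ 4.04 GiB needed, but only 1790664704 / 1024^3 ≈ 1.67 GiB reported free, nowhere near 254 GB. A quick cross-check I can run directly on the target host (xl reports values in MiB):
# Compare Xen's own view of total/free memory with what XAPI reports.
xl info | grep -E 'total_memory|free_memory'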
Should a toolstack restart do the trick?
-
RE: huge number of api call "sr.getAllUnhealthyVdiChainsLength" in tasks
Same problem here: 300+ sessions and counting.
XAPI restart did not solve the issue...