@pmcgrail said in Server will not migrate VMs to enter maintenance mode:
If the pool is set to HA the VMs need to be set to Restart the error occurs
If the pool is set to HA, the VMs need to be set to Restart, or the error occurs.
@olivierlambert
OK, so with both VMs set to Best Effort and HA disabled on the pool, the error does not occur.
If the pool is set to HA, the VMs need to be set to Restart, or the error occurs.
Only one setting works when the pool is in HA mode: the VMs set to Restart.
If I disable the pool's HA setting, the VMs migrate as needed and no errors occur, regardless of the VMs' HA settings.
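To double-check those states from dom0, the xe CLI can show both the pool HA flag and each VM's restart priority. A minimal sketch, assuming a single pool:

# Is HA enabled on the pool?
xe pool-param-get uuid=$(xe pool-list --minimal) param-name=ha-enabled
# Each VM's HA setting: "restart", "best-effort", or empty when HA is disabled for it
xe vm-list params=name-label,ha-restart-priority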
@Danp OK, so here is the situation....
Manual migrations work regardless of the VM's HA state...
Auto-migration (host evacuation) fails if anything but Restart is selected for the HA mode:
Pool in HA, VMs in Best Effort HA mode - memory error is thrown
Pool in HA, VMs in Disabled HA mode - memory error is thrown
Pool in HA, VMs in Restart HA mode - no memory error is thrown
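Given that matrix, a stopgap until this is fixed is to force the Restart priority on the affected VMs before evacuating. A hedged xe sketch, with placeholder UUIDs:

# Set the only priority that evacuation tolerates while pool HA is on
xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart
# Then retry the evacuation / maintenance mode
xe host-evacuate uuid=<host-uuid>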
@olivierlambert said in Server will not migrate VMs to enter maintenance mode:
host
I have three hosts with 1.5 TB of memory and 2 VMs running on them using less than 10 GB of RAM, so memory is not the issue.
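If it helps rule memory out completely, xapi's own idea of free host memory can be checked from dom0 (placeholder UUID):

# Ask xapi how much memory it believes is free on the host
xe host-compute-free-memory uuid=<host-uuid>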
I can manually migrate the VM and the host will go into maintenance mode.
The error is bogus; the issue may be more related to the XO VM running on the host, and the host failing to suspend the VMs on it.
If a host has a running VM such as the orchestrator, it throws "HOST_NOT_ENOUGH_FREE_MEMORY".
Manually migrating the VM solves the issue, but entering maintenance mode should evacuate the host.
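For reference, the manual step that does work can be scripted from dom0 as well; a minimal sketch with placeholder UUIDs:

# Live-migrate the XO VM off the host by hand, then retry maintenance mode
xe vm-migrate uuid=<xo-vm-uuid> host-uuid=<other-host-uuid> live=true

The failing call and its result follow: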
host.setMaintenanceMode
{
  "id": "68e82a9c-5b0d-497c-98e7-2e8af13100e0",
  "maintenance": true
}
{
  "code": "HOST_NOT_ENOUGH_FREE_MEMORY",
  "params": [
    "OpaqueRef:9f140062-18cd-1d9b-1980-35f9c5fa2b7b"
  ],
  "task": {
    "uuid": "fbfbd332-1e40-5b13-36e1-379e9914f6c5",
    "name_label": "Async.host.evacuate",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20250113T20:24:23Z",
    "finished": "20250113T20:24:23Z",
    "status": "failure",
    "resident_on": "OpaqueRef:f557ff05-15ae-ed72-07cd-0837ae050369",
    "progress": 1,
    "type": "<none/>",
    "result": "",
    "error_info": [
      "HOST_NOT_ENOUGH_FREE_MEMORY",
      "OpaqueRef:9f140062-18cd-1d9b-1980-35f9c5fa2b7b"
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_host.ml)(line 614))((process xapi)(filename hashtbl.ml)(line 159))((process xapi)(filename hashtbl.ml)(line 165))((process xapi)(filename hashtbl.ml)(line 170))((process xapi)(filename ocaml/xapi/xapi_host.ml)(line 610))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 191))((process xapi)(filename ocaml/xapi/rbac.ml)(line 200))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 75)))"
  },
  "message": "HOST_NOT_ENOUGH_FREE_MEMORY(OpaqueRef:9f140062-18cd-1d9b-1980-35f9c5fa2b7b)",
  "name": "XapiError",
  "stack": "XapiError: HOST_NOT_ENOUGH_FREE_MEMORY(OpaqueRef:9f140062-18cd-1d9b-1980-35f9c5fa2b7b)
    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)
    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1047:24)
    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1081:14
    at Array.forEach (<anonymous>)
    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1071:12)
    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1244:14)"
}
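For anyone wanting to reproduce this outside the XO UI, the same method should be callable with xo-cli (assuming xo-cli is registered against the XO instance; the json: prefix passes the value as a boolean rather than a string):

xo-cli host.setMaintenanceMode id=68e82a9c-5b0d-497c-98e7-2e8af13100e0 maintenance=json:true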