Hi,
I have a 5-host XCP-ng 8.2 pool with 380 GB of RAM per host, a SAN, HA enabled, and XOA. Last night I performed a rolling pool update and it successfully worked through evacuating, updating and rebooting each host, before starting to migrate VMs back to their original hosts. Partway through that final migrate-back stage, multiple VMs failed with "not enough memory" errors. When I checked in the morning, one host had only a few GB of free RAM while other hosts had ~200 GB free - very unbalanced and definitely not what I was expecting.
I've checked the forum and haven't found any other reports of RPU issues at this stage of the process - if I've missed one, please let me know.
With one host completely evacuated, the remaining hosts were at ~85% memory usage (so still roughly 55-60 GB free each), so there was plenty of space to shuffle VMs about - but I guess the RPU tried to move some VMs back before enough had been shifted off the target host to make room for them?
We distribute VMs across hosts manually for high availability and load balancing, so we'd ideally like them to return to their original locations automatically when the update finishes.
How can I ensure that the final "migrate VMs back" step completes successfully in the future?
The error was:
"message": "HOST_NOT_ENOUGH_FREE_MEMORY(34642853888, 3430486016)",
"name": "XapiError",
"stack": "XapiError: HOST_NOT_ENOUGH_FREE_MEMORY(34642853888, 3430486016)\n at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1068:24)\n at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1102:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1092:12)\n at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1265:14)"
Thanks in advance for any pointers,
Neal.