Migrations after updates
-
While testing the latest 8.3 updates (this may have been happening before), I noticed my VMs will migrate back over to host 1 when it comes back online, so both hosts are running equal RAM/CPU usage (4 VMs, 2 on each host). When I finish updating host 2 and it comes back online, I expect 2 of the VMs to migrate back to host 2. They do not, but they will if I manually migrate them.
My load balancer settings are in the screenshot. Is this expected behavior, or should I modify my settings?


-
@acebmxer Using performance mode should migrate your VMs so all systems are equally "performant". The different modes are outlined here: https://docs.xcp-ng.org/management/vm-load-balancing/
As to why your VMs do not migrate back, I would only be guessing. My guess is that it only polls the system every so often, and if your systems are performing well enough, nothing gets moved...
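To illustrate the guess above, here is a minimal sketch of how a threshold-based performance pass could work. All names (`HostStats`, `planMigration`, the 90% default) are hypothetical, not the actual Xen Orchestra load-balancer code:

```typescript
// Illustrative sketch only: a balancer pass that runs on a polling interval
// and plans a migration only when some host crosses a CPU threshold.

interface HostStats {
  id: string;
  cpuUsagePercent: number; // average CPU usage across the host
}

// If no host crosses the threshold, nothing is moved at all, which would
// explain why an idle homelab never rebalances after an update.
function planMigration(
  hosts: HostStats[],
  cpuThreshold = 90,
): { from: string; to: string } | null {
  const sorted = [...hosts].sort(
    (a, b) => b.cpuUsagePercent - a.cpuUsagePercent,
  );
  const busiest = sorted[0];
  const idlest = sorted[sorted.length - 1];
  if (busiest.cpuUsagePercent < cpuThreshold) {
    return null; // all hosts "performant enough": no migration planned
  }
  return { from: busiest.id, to: idlest.id };
}
```

Under this model, two nearly idle hosts never trigger a move, no matter how unevenly the VMs are placed.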
-
This reminds me that I have some work to do on my policy, mostly for anti-affinity.
-
ping @bastien-nollet
-
Hi @acebmxer,
I think the reason for this is a feature we recently added that prevents VMs from moving back and forth between hosts: VMs now have a cooldown (default: 30 minutes) between two load-balancer-triggered migrations.
Can you try setting the migration cooldown to 0 (in the "Advanced" section of the load balancer configuration) and tell us if that fixes this behaviour?
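The cooldown described above can be sketched roughly as follows. This is an illustrative model, not the actual XO implementation; `canMigrate`, `recordMigration`, and the in-memory map are made-up names:

```typescript
// Illustrative sketch of a per-VM migration cooldown: a VM the balancer
// moved less than `cooldownMinutes` ago is skipped, so it cannot bounce
// straight back to its previous host.

const lastMigration = new Map<string, number>(); // vmId -> epoch millis

function canMigrate(
  vmId: string,
  now: number,
  cooldownMinutes = 30,
): boolean {
  const last = lastMigration.get(vmId);
  if (last === undefined) return true; // never moved by the balancer
  return now - last >= cooldownMinutes * 60_000;
}

function recordMigration(vmId: string, now: number): void {
  lastMigration.set(vmId, now);
}
```

Note that setting the cooldown to 0 makes `canMigrate` always return true, which is why it is the suggested test for ruling this feature out.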
-
I was playing around with the settings last night, and the only way I could get the behavior I was looking for was to set a free memory limit.
I did try your suggestion this morning. I reconfigured the settings back to those in the screenshot, applied your recommendation, and then put host 2 in maintenance mode. Waited a few minutes, took it out of maintenance mode, and the VMs were still sitting on host 1.
I don't seem to have this issue on my work production servers. I guess there are too few VMs and too little load in the home lab. I think in the past I may not have noticed it, as I would be doing some other work in a VM or two, generating load.
Unless others have different recommendations, I think having the free memory limit set to half the host RAM will keep this even. Are there any unforeseen issues I should be aware of in this config for a home lab?
I am used to VMware just spreading VMs across all hosts evenly, then shuffling the load around based on CPU usage accordingly.
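The free-memory-limit workaround described above can be sketched like this. Again, the names (`HostMem`, `hostNeedingRelief`) and the selection logic are illustrative assumptions, not the plugin's actual code:

```typescript
// Illustrative sketch of a free-memory trigger: when a host's free RAM
// drops below the configured limit, plan a migration to the host with
// the most free memory.

interface HostMem {
  id: string;
  freeMemoryMiB: number;
}

function hostNeedingRelief(
  hosts: HostMem[],
  limitMiB: number,
): { from: string; to: string } | null {
  const starved = hosts.find((h) => h.freeMemoryMiB < limitMiB);
  if (!starved) return null; // every host has headroom: nothing to do
  const target = hosts.reduce((a, b) =>
    b.freeMemoryMiB > a.freeMemoryMiB ? b : a,
  );
  if (target.id === starved.id) return null; // nowhere better to go
  return { from: starved.id, to: target.id };
}
```

With the limit set to half the host RAM, a host carrying all four VMs falls below the limit even when idle, which forces a spread without needing any CPU load, matching the behavior observed in the lab.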