@rzr said:
Hi @andrew, thank you for your feedback. The fallback option you're suggesting will work, but it downgrades the security of your system, which is why we suggested updating clients instead.
If users do need to take action, I would rather recommend something that raises the security floor, like generating new keys with a modern, future-proof algorithm such as Ed25519:
ssh-keygen -t ed25519 -C "<email>"
for server in $servers; do ssh-copy-id "$server"; done
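Once the key is copied everywhere, it's worth confirming the new key actually works before retiring any old ones; a quick check (user and server names here are placeholders):
ssh -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519 user@server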
@Bastien-Nollet
I was playing around with the settings last night, and the only way I could get the behavior I was looking for was to set a free memory limit.
I did try your suggestion this morning. I reconfigured the settings back to those in the screenshot, applied your recommendation, and then put host 2 in maintenance mode. Waited a few minutes, took it out of maintenance mode, and the VMs were still sitting on host 1.
I don't seem to have this issue on my work production servers. I guess there are too few VMs and too little load in the home lab. I think in the past I may not have noticed it, as I would usually be doing some other work in a VM or two, generating load.
Unless others have different recommendations, I think having the free memory limit set to half the host RAM will keep this even. Are there any unforeseen issues I should be aware of with this config in a home lab?
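To put rough numbers on that (purely illustrative, assuming two 64 GB hosts and that the limit acts as a low-water mark): a 32 GB free memory limit would trigger migrations once a host drops below 32 GB free, so neither host should end up carrying much more than half the pool's load while the other sits idle.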
I am used to VMware just spreading VMs across all hosts evenly, then shuffling the load around based on CPU usage.
And to really round this out, the MTBF for any of these drives is in the millions of hours (1.2-3M); that's roughly 137 to 342 years of continuous use, respectively.
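(The conversion: 1,200,000 h ÷ 8,760 h/year ≈ 137 years, and 3,000,000 h ÷ 8,760 h/year ≈ 342 years.)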
Basically, if a drive dies, just replace it, whatever the model; in the end the rated reliability of these drives is meant to outlast all of us.
Unless you actually need a specific feature offered only by a particular form factor or model, don't bother.