@ha_tu_su said in Other 2 hosts reboot when 1 host in HA enabled pool is powered off:
I have created XOSTOR shared storage using disks from all 3 hosts.
Can you elaborate on how you achieved this and what settings you used?
Out of curiosity, can you try the new Rust guest tools? (Don't forget to remove the old ones first.)
Download (wget) this .deb and install it: https://gitlab.com/xen-project/xen-guest-agent/-/jobs/6041686362/artifacts/file/target/release/xen-guest-agent_0.4.0_amd64.deb
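Roughly, assuming a Debian/Ubuntu guest where the old tools were installed from the usual `xe-guest-utilities` package (your package name may differ), the steps would look like this:

```bash
# Remove the existing guest tools first (package name may differ per distro).
sudo apt-get remove -y xe-guest-utilities

# Download the Rust guest agent build linked above and install it.
wget https://gitlab.com/xen-project/xen-guest-agent/-/jobs/6041686362/artifacts/file/target/release/xen-guest-agent_0.4.0_amd64.deb
sudo apt-get install -y ./xen-guest-agent_0.4.0_amd64.deb
```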
If the other hosts are rebooting, it means the storage heartbeat is failing for all hosts. It's really hard to answer offhand without reading through literally tons of logs; this might be a pretty complex problem to solve.
We can try to reproduce it internally, though.
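In the meantime, a rough way to collect the relevant HA state before digging into the logs (this assumes a standard XCP-ng 8.x layout; parameter names and log paths may vary):

```bash
# Check the pool's HA state and how many host failures it tolerates.
xe pool-list params=name-label,ha-enabled,ha-host-failures-to-tolerate

# The HA statefile lives on the heartbeat SR; listing it confirms it's reachable.
xe pool-list params=ha-statefiles

# The HA daemon logs its view of the heartbeats here on each host.
less /var/log/xha.log
```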
Because you are writing in the VM faster than the garbage collector can merge/coalesce.
You could try changing the leaf-coalesce timeout value to see if it helps.
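For reference, a quick way to see the current leaf-coalesce limits and watch the garbage collector at work (the file path and constant names below are from memory of XCP-ng 8.x, so treat them as assumptions to verify on your host):

```bash
# Show the leaf-coalesce limits used by the SM garbage collector
# (constant names may vary between SM versions).
grep -n "LEAF_COALESCE" /opt/xensource/sm/cleanup.py

# Watch coalesce/GC activity while the VM is writing.
tail -f /var/log/SMlog | grep -i coalesce
```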
@CodeMercenary That doesn't seem like an XO / XCP-ng issue. I would troubleshoot it to see why neither NFS device will mount. How full are these devices?
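Something along these lines from the host should show whether the exports are visible and how full they are (the server address is a placeholder; this is just a sketch):

```bash
# Ask the NFS server which exports it advertises (replace with your NAS IP).
showmount -e 192.0.2.10

# Check how full the currently mounted NFS SRs are.
df -h -t nfs -t nfs4

# Any mount errors from the SM backend end up in SMlog.
grep -i nfs /var/log/SMlog | tail -n 50
```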
Weird, this PR should have fixed most of it: https://github.com/vatesfr/xen-orchestra/pull/7841
@frank-s This PR is where the update occurred -- https://github.com/vatesfr/xen-orchestra/pull/7836
@florent should be able to explain the meaning.