I know their product claims, but I also personally know the inventor of DRBD, and we discussed this last month when I met him in his office. There's an inherent risk in using DRBD in a 2-node setup without an external arbiter.
Regarding the custom field, that will be the case; check whether there's an issue open in the XO GitHub repository.
Regarding VM order without a vApp, I'm not sure it's supported. We already checked that on our side, and it doesn't really work as expected.
Your last question: obviously, the goal is to be able to get rid of XCP-ng Center, but we won't introduce features that don't work correctly. The order field of a VM without a vApp doesn't work, at least the last time we tested it.
Regarding iSCSI-HA, using an external IP address for heuristics helps.
Regarding the last question: setting the HA SR works correctly, and it's necessary in order to use HA at all, AFAIK.
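For reference, the heartbeat SR is passed in when HA is enabled on the pool. A minimal sketch with the `xe` CLI (the SR UUID is a placeholder you'd look up on your own pool, so this is pool-specific and not runnable as-is):

```shell
# Sketch: enable HA on an XCP-ng pool using a shared SR as the heartbeat SR.
# <sr-uuid> is a placeholder; pick a shared SR from the list below.
xe sr-list params=uuid,name-label,type          # find a shared SR (e.g. iSCSI/NFS)
xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>  # statefile + heartbeat live on this SR
# To turn HA back off: xe pool-ha-disable
```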
If you use an external machine, yes, it works. But then it's not a 2-node scenario (it's 3, with the arbiter).
Regarding the HA SR, it might already be done or tracked in a GH issue on the XO repo. If not, feel free to create one.
It depends on what you regard as a node. I would not regard a switch, whose IP we use as the external address, as a XEN node.
With iSCSI-HA you just need an IP address that is always reachable from every node. We use the IP address of a managed switch for that. No XCP-ng, no VMs, just a reachable IP address.
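The idea behind that heuristic can be sketched in a few lines of shell. Everything here is an illustration, not iSCSI-HA's actual code: the IP below is a placeholder for your always-reachable address, and `decide` is a hypothetical helper naming the two outcomes.

```shell
#!/bin/sh
# Sketch of the external-IP heuristic: if we can still reach a known-good IP,
# our own network is fine, so a silent peer is likely dead. If we cannot,
# we are probably the isolated node and must NOT take over (avoids split-brain).
HEURISTIC_IP="192.0.2.1"   # assumption: e.g. your managed switch's IP

decide() {
  # $1 = exit status of the reachability probe (0 = reachable)
  if [ "$1" -eq 0 ]; then
    echo "takeover"        # network OK on our side: assume the peer failed
  else
    echo "standstill"      # we are isolated: do nothing
  fi
}

ping -c 1 -W 2 "$HEURISTIC_IP" >/dev/null 2>&1
decide $?
```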
@olivierlambert Hey, so we've run some basic tests with High Availability (HA) in our environment, which currently runs on two hosts. I understand that you're suggesting that we need at least three nodes to have safe HA without data loss, but I guess I'm wondering what, specifically (if anything) we need to do to configure that third node to do the job that you're saying we need it for. Do we just need to add a third node, and XCP-ng will take care of that for us, or is there some kind of special configuration that we need with that third node?
Further, is there a way that we can set up VMs to return to their affinity host when it comes back up? I've tested a basic HA setup (with our two hosts), and our little test VM died, but it worked - it came back up on the still-living node, which was great. We don't need like, life-or-death mission critical HA, but having services automatically come back in the event of a downed host is pretty damn handy for the flow of daily business operations.
Nothing to do for the 3rd node. It's all a matter of quorum, which can't be reached with 2 entities.
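The arithmetic behind that: a majority quorum needs floor(n/2) + 1 votes, so a 2-node pool needs both votes and tolerates zero failures, while 3 nodes need only 2 votes and survive the loss of one. A quick sketch:

```shell
#!/bin/sh
# Majority quorum: floor(n/2) + 1 votes are required to act safely.
for n in 2 3 4 5; do
  need=$(( n / 2 + 1 ))
  survives=$(( n - need ))
  echo "$n nodes: need $need votes, tolerates $survives failure(s)"
done
# 2 nodes: need 2 votes, tolerates 0 failure(s)   <- why 2 nodes can't do safe HA
# 3 nodes: need 2 votes, tolerates 1 failure(s)
```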
Regarding affinity, it's only applied at boot, and XAPI isn't required to honor it; it will do its best. You might try the Xen Orchestra load balancer plugin if you want to keep control over what's running where at any moment.
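If you do want to nudge placement, XAPI exposes an `affinity` field on the VM. A sketch with the `xe` CLI (the UUIDs are placeholders, so this is pool-specific and not runnable as-is):

```shell
# Sketch: set a VM's preferred (affinity) host. UUIDs are placeholders.
xe host-list params=uuid,name-label                   # find the host UUID
xe vm-param-set uuid=<vm-uuid> affinity=<host-uuid>   # preference, not a guarantee
# XAPI consults this at VM start; if the host lacks resources, it places elsewhere.
```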
@olivierlambert So we just need three nodes for "quorum", but high availability suggests that a VM "won't" go down in certain instances, correct? If so, "where" is that VM running and would it be correct to say that that third node is present to make a vote as to "which" VM image is running?
Do you guys have a doc on your high availability implementation?
Take a look here: https://xen-orchestra.com/blog/xenserver-and-vm-high-availability/
Old but still relevant. XAPI will place the VM wherever there are enough resources.