-
@olivierlambert said in XOSTOR hyperconvergence preview:
Hi,
- A partition will work, yes.
- On top of soft RAID: yes, it should work too, but I have no idea if it's a best practice or not. We can ask the LINSTOR guys.
- The bigger and faster the network, the better. But feel free to test whatever you like during this phase.
Thanks for getting back to me so quickly! Those are pretty much the answers I expected based on my experience with DRBD and what little I know of LINSTOR, but I wanted to confirm them if possible. My planned software RAID test is three servers, each with a 4-drive soft RAID 10.
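For reference, the array build on each host would be something like this (a sketch with hypothetical device names; the resulting md device is what gets handed to XOSTOR):
# Hypothetical disks; one 4-drive RAID 10 array per host.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde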
-
This seems to be working for me. It's definitely working on top of software RAID. I'm able to move and copy VMs between XOSTOR and other SRs. Copies between a shared NFS v4 SR and XOSTOR are a little slow but I suspect that's more of a network bandwidth problem than anything else. If that's the case, being able to dedicate a network interface for XOSTOR use would probably take care of a lot of that.
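From a quick look at the LINSTOR docs, dedicating an interface would presumably be something along these lines (hypothetical node, NIC, and storage-pool names; untested on my side):
# Register the storage NIC on a node and prefer it for DRBD traffic.
linstor node interface create server1 storage_nic 10.0.0.1
linstor storage-pool set-property server1 xcp-sr-linstor_group PrefNic storage_nic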
The only problem I ran into with the installation was that one of the three servers was disconnected from the SR after it was created. After reconnecting it, things seem to be working well.
Speed seems to be a little better than running over the NFS SR.
Next question is where to go from here. Is there any specific testing you would like done or any information you would like about my test systems?
-
Play with it: snapshot, backup, whatever. What matters is resiliency.
Really happy to know it works well for you! More improvements are coming in January
-
@olivierlambert
After some more playing, I'm beginning to see some issues. One is that running HA on top of it causes problems. After setting up HA and pointing it at XOSTOR as the state/heartbeat SR, things appeared to work, but since doing that, all three servers experienced crashes in pretty short succession after running successfully for several hours (they have run fine for a day or two since then). While running with HA enabled and using XOSTOR, the logs fill up with DRBD state-change messages for the xcp-persistent-ha-statefile.
If you're interested, I can gather up logs covering the day of the crashes or any other information you might want. Just let me know what you'd want for that and I'll be happy to collect it for you.
I've since disabled HA again and expect the servers will be as stable as they were before I enabled it (very stable from what I've seen so far except for the experiment with HA).
I suspect that HA will probably also work just fine as long as XOSTOR is not used for the heartbeat/metadata SR for it. This pool is also set up with a shared NFS v4 SR and I could experiment with using HA with that as the heartbeat SR but still using XOSTOR to house the VMs.
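If I try that, pointing HA at the NFS SR should just be a matter of something like this (placeholder UUID):
# Use the NFS SR for HA's statefile/heartbeat while the VMs stay on XOSTOR.
xe pool-ha-enable heartbeat-sr-uuids=<NFS_SR_UUID>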
-
Hmm, weird. I suppose it would be great to have more details for @ronan-a.
In theory, we should have HA working with LINSTOR.
-
@olivierlambert
No problem. Just let me know, or have him let me know, what information he needs and I'll try to get it to you folks somehow.
The actual crashes seem to be related to fencing of machines and recovery after I intentionally forced an outage on one of the test servers. My test was making sure that the LINSTOR controller service was running on the server acting as the pool master and hosting a couple of the several test VMs with HA enabled. Then I forced a failure by pulling its power cord.
The servers recovered on their own from that, with one of the others in the pool taking over as pool master and the two remaining hosts trying to restart the failed VMs, mostly successfully. The unsuccessful startups were due either to a lack of RAM or to not being able to find the VDI for the VM. The latter problem went away on its own, and I suspect that was due to HA not waiting long enough for LINSTOR to straighten out the storage situation before trying to restart the VM.
After that I restarted the "failed" server and it came up, rejoined the pool, and I was able to start VMs on it. As far as I can see, it looks like the servers crashed and mostly recovered on their own shortly after that.
That was three days ago and the pool has been up and running since then without problems with HA enabled. I've since disabled it after looking at the logs associated with the crash and seeing that having HA's heartbeat storage on top of LINSTOR was causing the logs to fill up at a rate of several lines per second.
-
@ronan-a will be in on Monday and will probably ask you some questions.
-
Or Wednesday in fact, but he'll answer very soon
-
@olivierlambert said in XOSTOR hyperconvergence preview:
shared=true device-config:provisioning=thin
Interesting... Thank you for this implementation, looks promising.
Out of curiosity, can we use this "technology" instead of XOA's CR job? So, for a single-host setup with one local disk and, for example, a second networked iSCSI/NAS drive, can we use XOSTOR to clone the data in the background?
This would be more of a backup solution than an HA setup, and could ensure up-to-date data on the iSCSI device.
-
@jeffberntsen Hello, so I'm available. ^^
If you're interested, I can gather up logs covering the day of the crashes or any other information you might want. Just let me know what you'd want for that and I'll be happy to collect it for you.
Yes! Could you send me your logs? (Old XCP-ng logs: xha.log, daemon.log, SMlog, kern.log..., and the logs of LINSTOR: /var/log/linstor-{controller/satellite})
The actual crashes seem to be related to fencing of machines and recovery after I intentionally forced an outage on one of the test servers. My test was making sure that the LINSTOR controller service was running on the server acting as the pool master and hosting a couple of the several test VMs with HA enabled. Then I forced a failure by pulling its power cord.
It could be a delay during the restart of the linstor-controller, or a bad sync in the DRBD layer.
I have already observed a long delay with a similar test. But we would still have to check the logs. In this situation you can execute this command where the current LINSTOR controller is running:
linstor resource list
It's useful to check the current state.
That was three days ago and the pool has been up and running since then without problems with HA enabled. I've since disabled it after looking at the logs associated with the crash and seeing that having HA's heartbeat storage on top of LINSTOR was causing the logs to fill up at a rate of several lines per second.
Yeah, we are aware of this problem. We have discussed reducing the verbosity of the DRBD logs with the LINBIT team, and there is a new patch to test in the next CH release to compress log files more often. It would be interesting to reduce the space usage of /var/log.
-
@ronan-a said in XOSTOR hyperconvergence preview:
@jeffberntsen Hello, so I'm available. ^^
If you're interested, I can gather up logs covering the day of the crashes or any other information you might want. Just let me know what you'd want for that and I'll be happy to collect it for you.
Yes! Could you send me your logs? (Old XCP-ng logs: xha.log, daemon.log, SMlog, kern.log..., and the logs of LINSTOR: /var/log/linstor-{controller/satellite})
Absolutely. I've grabbed all logs from the system including the XCP-ng crash log folder from the day I ran the test and a few days before and after. I've got .tar.gz files of the contents of the logs folders from each of the three servers in my test pool covering that period, about 250MB of compressed files total. What would be the best way to get them to you?
It could be a delay during the restart of the linstor-controller, or a bad sync in the DRBD layer.
I have already observed a long delay with a similar test. But we would still have to check the logs. In this situation you can execute this command where the current LINSTOR controller is running:
linstor resource list
It's useful to check the current state.
I did that after everything came back up on its own and it reported all resources as up and healthy.
Something I noticed is that the linstor command only works on the host that is running the LINSTOR controller at the time, since the CLI looks for the controller on localhost.
I think the delay in my case was the controller coming back up on a different host. I didn't see any sign of a bad sync in DRBD. (I've used DRBD on and off quite a bit so have some experience with that but have very little with LINSTOR).
Yeah, we are aware of this problem. We have discussed reducing the verbosity of the DRBD logs with the LINBIT team, and there is a new patch to test in the next CH release to compress log files more often. It would be interesting to reduce the space usage of /var/log.
I'm pretty sure that's just related to HA from what I could see. You've obviously worked with it more than I have, so please correct me if I'm wrong, but it looks like LINSTOR tries to switch the active copy of the data to whichever system tries to write to the resource at the time. With HA, all of the servers in the pool are constantly trying to write to the HA metadata and heartbeat VDIs, driving LINSTOR crazy trying to keep up. As far as I can see, that doesn't happen with normal VM use because VDIs are normally opened, read, and written by just one system at a time.
-
Absolutely. I've grabbed all logs from the system including the XCP-ng crash log folder from the day I ran the test and a few days before and after. I've got .tar.gz files of the contents of the logs folders from each of the three servers in my test pool covering that period, about 250MB of compressed files total. What would be the best way to get them to you?
You can upload it where you want. Then you can send me a private message with the download link. Thank you.
I did that after everything came back up on its own and it reported all resources as up and healthy.
Something I noticed is that the linstor command only works on the host that is running the LINSTOR controller at the time, since the CLI looks for the controller on localhost.
You can use this command
linstor --controllers=<HOSTNAME_OR_IP> resource list
when you are on another host. Note: the linstor-controller service is automatically started by a specific smapi daemon, minidrbdcluster, because we want to detect a host crash or reboot at any time and start a new controller if necessary. Also, the LINSTOR DB is shared using a VDI, so the controller service must always be managed by XCP-ng and not started by a user.
I think the delay in my case was the controller coming back up on a different host. I didn't see any sign of a bad sync in DRBD. (I've used DRBD on and off quite a bit so have some experience with that but have very little with LINSTOR).
I'm pretty sure that's just related to HA from what I could see. You've obviously worked with it more than I have, so please correct me if I'm wrong, but it looks like LINSTOR tries to switch the active copy of the data to whichever system tries to write to the resource at the time. With HA, all of the servers in the pool are constantly trying to write to the HA metadata and heartbeat VDIs, driving LINSTOR crazy trying to keep up. As far as I can see, that doesn't happen with normal VM use because VDIs are normally opened, read, and written by just one system at a time.
Very good analysis on your part. Indeed, this VDI is shared, and DRBD prevents us from opening it on several hosts at once. We haven't found a better solution than to open, write, and close it for the moment. That's why we must reduce the spam in the log files with a few patches.
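For reference, a quick way to see whether a given host currently holds the controller is to check those two units (a minimal sketch, assuming the service names above):
# The controller only runs where minidrbdcluster has started it.
systemctl status minidrbdcluster linstor-controller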
-
@ronan-a said in XOSTOR hyperconvergence preview:
You can upload it where you want. Then you can send me a private message with the download link. Thank you.
Done. Let me know if you have any problems getting to it.
You can use this command
linstor --controllers=<HOSTNAME_OR_IP> resource list
when you are on another host. Note: the linstor-controller service is automatically started by a specific smapi daemon, minidrbdcluster, because we want to detect a host crash or reboot at any time and start a new controller if necessary. Also, the LINSTOR DB is shared using a VDI, so the controller service must always be managed by XCP-ng and not started by a user.
A little reading around in the LINSTOR documentation eventually helped me out with this. It's possible to set an environment variable, LS_CONTROLLERS, with a list of possible controller machines, and the linstor CLI command will try all of the servers on the list until it finds the controller. On the three servers in my test pool, I can do something like LS_CONTROLLERS=server1,server2,server3 when I first get into a shell, and as long as the three server names can be resolved on all three hosts (either via DNS or because they're in the /etc/hosts file), the linstor command works from any of them no matter which one is the controller.
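In shell terms, that boils down to something like this (a minimal sketch; the server names are just my hypothetical pool members):
# Export so every linstor invocation in this shell can find the controller.
export LS_CONTROLLERS=server1,server2,server3
linstor resource list
-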
@olivierlambert said in XOSTOR hyperconvergence preview:
Play with it: snapshot, backup, whatever. What matters is resiliency.
More playing: I've just installed the latest set of updates as a rolling pool update and it handled things fine. No problems with XOSTOR shifting the VMs around during the update and no apparent problems afterward.
-
That's a great test indeed. I have to say I'm impressed; maybe it's because I'm so used to the corner cases I tested for months, triggering various issues, but every time @ronan-a came up with a solution. Kudos to him!
-
@olivierlambert This looks very promising. We're currently running K8s on top of XCP-ng hosts and deploying everything through XOA with Terraform adapters. It's been working well for us, but we're not using a shared SR, which we're looking into deploying. The nice thing is that it looks like we could actually use LINSTOR directly from K8s, removing two storage layers completely (OpenEBS + soft RAID 5 local SR) and making the whole thing work even better for both XCP-ng and K8s.
I have a question before trying to deploy this: how would we go about changing the SR adapter in cases where we need to add, remove, or replace an XCP-ng host? Should we be able to change the SR configuration while it's active?
-
Likely a question for @ronan-a
edit: however, I'd love to have a chat with you to discuss your existing k8s workflow with XCP-ng/XOA!
-
I have a question before trying to deploy this: how would we go about changing the SR adapter in cases where we need to add, remove, or replace an XCP-ng host? Should we be able to change the SR configuration while it's active?
Well, a LINSTOR SR can be updated with new/deleted hosts. For the moment we don't have a script to simplify this usage, but with a few linstor and smapi commands you can do that.
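As a rough sketch of the LINSTOR side only (hypothetical node name, IP, and volume group; the smapi side, such as creating and plugging a PBD for the SR on the new host, would still be needed):
# Register the new host as a LINSTOR node and give it a storage pool.
linstor node create xcp-host4 192.168.1.14
linstor storage-pool create lvm xcp-host4 xcp-sr-linstor_group linstor_group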
-
@olivierlambert Feel free to email me at alexandre@floatplane.com.
@ronan-a I'll be deploying a test cluster this week and will see if I can figure out the proper commands to perform those actions. Regarding the LINSTOR GUI, it seems like it's only supported on a controller; would that mean I should install it on the cluster master's dom0?
-
Duly noted, thanks.