XOSTOR and mdadm software RAID
-
I am using a dual-server cluster, each host with 10x 4TB SSDs, and I use mdadm to create a RAID10 volume on top of them that my virtual machines run off.
Now I am interested in using XOSTOR by adding one more host. Is it still possible to use mdadm software RAID with this solution?
-
-
I don't think you'll be able to use the disks that are part of the RAID10 mdadm volume.
You'd be better off using dedicated disks just for XOSTOR.
-
@OhSoNoob I don't see a good reason to continue using RAID10 below DRBD. Your disks will be best used in a linear LVM volume.
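Roughly, a linear setup looks like this at the LVM level (I believe the XOSTOR install script does something equivalent when pointed at raw disks); the device names and volume group name below are just placeholders, adjust them to your layout:
```bash
# Only do this after the old md array is stopped and removed -- it is destructive.
for d in /dev/sd{b..k}; do
    wipefs -a "$d"      # clear leftover mdadm/LVM signatures
    pvcreate "$d"       # turn each SSD into an LVM physical volume
done

# One linear (concatenated) volume group across all ten disks:
# no striping or mirroring here, redundancy comes from DRBD replication
# between the hosts.
vgcreate linstor_group /dev/sd{b..k}
```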
-
Going by the documentation, it wasn't clear to me that it would create a linear LVM volume (so like a RAID0 or JBOD); now I understand that redundancy is achieved through replication.
Thanks!
With two diskful nodes, would it be recommended to use a replication factor of 3, meaning the data is in three places? And does this guarantee that the data is on both servers, so that if one server is struck by lightning the other can HA-recover the virtual machines without losing anything but what was in RAM? And what if, in this scenario, one drive of the only remaining server also dies?
-
@OhSoNoob I've used XOSTOR on top of MDRAID and it seemed to work well for me during my testing. I ran tests of it on top of MD RAID 1, 5, and 10 (MDRAID's "RAID 10", which isn't really RAID 10) and had good luck with it. XOSTOR is really adding a second layer of redundancy at that point, similar to MDRAID 5+1 builds, so it is almost overkill. Almost.
Where I see the most benefit from XOSTOR on MDRAID is on top of RAID 10 or RAID 0 arrays. Depending on the speed of your drives, you might get some benefit from the increased read speed (and read/write speed for RAID 0). In addition, RAID 10 would give you some additional redundancy, so that losing a drive wouldn't mean the loss of that node for XOSTOR's purposes, possibly making recovery easier.
Having some redundancy at the RAID level might also be useful for a stretched cluster or some other situation where your network links between XOSTOR nodes aren't as fast as they should be; recovering at the RAID level might be much faster than recovering or rebuilding an entire node over a slow link.
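If you want to try that layering, the sketch is roughly: build the md array first, then hand the resulting device to XOSTOR as if it were a single disk. Device names are placeholders and the config file path may differ on your install:
```bash
# RAID 10 across ten disks (placeholder names) -- XOSTOR then sees /dev/md0 as one disk.
mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd{b..k}

# Persist the array so it reassembles on boot (path is /etc/mdadm.conf on
# CentOS-based dom0s; adjust if yours differs).
mdadm --detail --scan >> /etc/mdadm.conf
```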
@ronan-a, I'm not sure if you remember, but the very first test of XOSTOR I ran, shortly after it was introduced, was on top of RAID 10 arrays. I kept that test cluster alive and running until equipment failure (failed motherboards, nothing related to XOSTOR or MDRAID) forced me to scrap it. I had similar teething pains to others while XOSTOR was being developed and debugged during the test phase, but nothing related to running on top of MDRAID as far as I could tell.