Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng
-
Hello, I did a quick test with GlusterFS today. Here is a quick HowTo: https://github.com/xcp-ng/xcp/wiki/GlusterFS
Please feel free to contribute! (to the wiki and here)
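For a rough idea of what the setup involves without opening the wiki: hostnames, brick paths and the SR name below are placeholders, and the exact driver/SR type is whatever the wiki recommends, so treat this as an outline rather than the actual procedure.

```sh
# On the Gluster nodes: create and start a replicated volume
# (hosts and brick paths are placeholders)
gluster volume create glustersr replica 2 host1:/bricks/glustersr host2:/bricks/glustersr
gluster volume start glustersr

# On the XCP-ng host: mount the volume and create a file-based SR on top of it
mkdir -p /mnt/glustersr
mount -t glusterfs host1:/glustersr /mnt/glustersr
xe sr-create name-label="GlusterFS SR" type=file content-type=user \
    device-config:location=/mnt/glustersr
```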
-
I'm not 100% sure it's enough to put a warning on it and have this in the official wiki. There are a lot of changes that could clash with the current code (e.g. with the XAPI conf: for ext4 and xfs, we use an extra file outside it).
So this guide could potentially cause issues. Maybe using your own Gist until it's pushed in XCP-ng would be better?
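To illustrate the kind of change involved: the "extra file" approach means a drop-in config that extends XAPI's SM plugin whitelist. A minimal sketch follows; the path, driver name and plugin list are assumptions on my side, not what any current package ships.

```sh
# Illustrative only: a drop-in extending XAPI's SM plugin whitelist
# (path, driver name and plugin list are assumptions, not shipped files).
cat /etc/xapi.conf.d/glusterfs.conf
sm-plugins=ext nfs iso udev lvm file glusterfs

# XAPI only picks this up after a toolstack restart:
xe-toolstack-restart
```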
-
@olivierlambert said in HowTo use GlusterFS as SR on XCP-ng:
until it's pushed in XCP-ng
What did you mean? I didn't get it.
-
Until we get a proper Gluster package in XCP-ng. Adding an SR can be tricky, due to a lot of static values everywhere regarding SR names.
-
@olivierlambert hmm... hmmm... understood... so we need an unofficial wiki?
-
Or someone who quickly builds a package for gluster
-
@olivierlambert I moved it to the Concepts area of the wiki and added a clarification. Should be clear now.
-
(I just found my own warning in this thread https://xcp-ng.org/forum/topic/291/how-to-use-glusterfs-driver-with-xcp-ng-7-5/10)
So what's needed to build an official XCP-ng Gluster package? Or is someone already in the process of building one?
The more I read about GlusterFS, the more it appeals to me as a good, stable solution I'd like to use.
-
The priority is to master SMAPIv3 first, because this will be the foundation of all future storage integrations.
It's not a trivial task, but it's the main mission of @ronan-a.
-
@olivierlambert ok, so I'll hold back
-
SMAPIv3 is full of surprises… Some of them are great (no need to statically modify some XAPI config files to add a new SR backend!!) but some aren't (perf issues).
-
@olivierlambert @ronan-a How far are you from showing us the first implementation? Eager to try.
I thought performance would stay the same as before... what caused the drop?
-
We already have an ext4-ng driver working, but benchmarks are… not good so far. So now the goal is to investigate and find the bottleneck. Oddly enough, even when using what we think is the same datapath as SMAPIv1 (yes, you can do that), it's still slower. In the end, we must really understand exactly how this whole stack works, so we can improve it ourselves (there is no public master branch for some repos, thanks Citrix…).
edit: we are using the datapath coded by Citrix, so they should suffer from the same problem as us for GFS2. I don't understand how this could be "production ready", but that's another story.
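If you want to reproduce the comparison yourself, the simplest approach is to attach one VDI from a SMAPIv1 SR and one from the ext4-ng SR to the same VM and run an identical fio job against each. The parameters below are only a generic starting point, not our exact benchmark, and the device names are examples.

```sh
# Inside the test VM: run the same job against each disk (device names are examples;
# writing to a raw device destroys its contents, so use dedicated test VDIs).
fio --name=seqwrite  --filename=/dev/xvdb --rw=write     --bs=1M --size=4G --direct=1 --ioengine=libaio
fio --name=randwrite --filename=/dev/xvdb --rw=randwrite --bs=4k --size=1G --direct=1 --ioengine=libaio --iodepth=32
```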
-
@olivierlambert just checking in, the wiki still says it's not production ready...
@borzel how is your implementation performing over the last few months?
-
It depends on what you mean by production ready. E.g. this driver has been used in XOSAN for two years without issues, but we support XOSAN as a whole ourselves, not the driver alone.
-
@olivierlambert any update on this topic? It's been almost a year now, so I'm wondering: is it safe to use yet?
Also, I stumbled upon a Ceph implementation using SMAPIv3; I believe this is a better option than GlusterFS?
https://xcp-ng.org/forum/topic/1151/ceph-qemu-dp-in-xcp-ng-7-6/12
-
@geek-baba or a CephFS option!
-
@jmccoy555 that doesn't meet the need I have. I have a K8s cluster and everything works fine, except that some apps require block storage and it needs to be fast. I have a Rook-Ceph cluster running inside the K8s cluster, and I was looking to move it out of there so I don't have to worry about it during K8s cluster upgrades/migrations.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/002815.html
-
@geek-baba Fair enough, just another option. I don't like block storage and have always favoured NFS over iSCSI; I think the speed trade-off is worth the transparency.
-
@jmccoy555 I don't like block storage either. All my VMs run off an NVMe NAS over a 10Gig network, and all my K8s apps use dynamically provisioned NFS PVs/PVCs; unfortunately some of today's apps are not cloud-native and need block storage for their built-in database. I looked at CephFS and it looks like another option to NFS though; I'll test the performance at some point...
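If anyone wants to kick the tyres on CephFS by hand in the meantime, the kernel client can be mounted directly; the monitor address, user name and secret path below are placeholders for whatever your Ceph cluster uses.

```sh
# Needs the ceph-common package for mount.ceph; monitor address and secret are placeholders.
mkdir -p /mnt/cephfs
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# Quick and dirty throughput check against the mount:
dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=1024 conv=fdatasync
```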