Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng
-
@olivierlambert hmm.. hmmm... understand... so we need an unofficial wiki?
-
Or someone who quickly builds a package for gluster
-
@olivierlambert I moved it to the Concepts area of the wiki and added a clarification. Should be clear now.
-
(I just found my own warning in this thread https://xcp-ng.org/forum/topic/291/how-to-use-glusterfs-driver-with-xcp-ng-7-5/10)
So what's needed to build an official XCP-ng Gluster package? Or is someone already in the process of building one?
The more I read about GlusterFS, the more it appeals to me as a good, stable solution I want to use.
-
The priority is to master SMAPIv3 first, because it will be the foundation of all future storage integrations.
This is not a trivial task, but it is the main mission of @ronan-a
-
@olivierlambert ok, so I hold back
-
SMAPIv3 is full of surprises… Some of them are great (no need to statically modify XAPI config files to add a new SR backend!!) but some aren't (performance issues)
-
@olivierlambert @ronan-a How far are you from showing us the first implementation? Eager to try.
I thought performance would stay the same… what caused the drop?
-
We got an ext4-ng driver already working. But benchmarks are… not good so far. So now the goal is to investigate and find the bottleneck. Oddly enough, even while using what we think is the same datapath as SMAPIv1 (yes, you can do that), it's still slower. In the end, we must really understand exactly how this whole stack works, so we can improve it ourselves (there is no public master branch for some repos, thanks Citrix…)

edit: we are using the datapath coded by Citrix, so they should suffer from the same problem as us with GFS2. I don't understand how this could be "production ready", but that's another story.
-
@olivierlambert just checking in, the wiki still says it's not production ready...
@borzel how has your implementation been performing over the last few months?
-
Depends on what you mean by "production ready". E.g. this driver has been used for two years in XOSAN without issues. But we support XOSAN ourselves, not the driver alone.
-
@olivierlambert any update on this topic? It's been almost a year now, so I'm wondering: is it safe to use now?
Also, I stumbled upon a Ceph implementation using SMAPIv3; I believe this is a better option than GlusterFS?
https://xcp-ng.org/forum/topic/1151/ceph-qemu-dp-in-xcp-ng-7-6/12
-
@geek-baba or a CephFS option!
-
@jmccoy555 that doesn't meet the need I have. I have a K8s cluster and everything works fine, except some apps require block storage and it should be fast. I have a rook-ceph cluster running inside the k8s cluster, and I was looking to move it out so I don't have to worry about it during k8s cluster upgrades/migrations.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/002815.html
-
@geek-baba Fair enough, just another option. I don't like block storage; I've always favoured NFS over iSCSI. I think the speed trade-off is worth the transparency.
-
@jmccoy555 I don't like block storage either. All my VMs run off an NVMe NAS over a 10Gig network, and all my k8s apps use dynamically provisioned NFS PVs/PVCs. Unfortunately, some apps today are not cloud native and need block storage for their built-in database. CephFS does look like another alternative to NFS, though; I will test its performance at some point...
-
@geek-baba Are you looking to expose RBD over iSCSI and use that as an SR for your XCP-ng hosts? If you don't need an SR, I think the rbd client in your app VMs should work fine.
Maybe we are missing the larger picture of your implementation.
-
@geek-baba said in Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng:
@olivierlambert any update on this topic? It's been almost a year now, so I'm wondering: is it safe to use now?
It was always "safe". Gluster packages are now available directly in XCP-ng 8.1
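For anyone landing here: assuming an existing Gluster volume (here called glustervolume, served by hosts gluster1/gluster2/gluster3; all placeholder names) is already created and reachable, attaching it as an SR on XCP-ng 8.1 looks roughly like this with the xe CLI. A sketch only; check the XCP-ng storage documentation for the exact options your version supports.

```shell
# On the XCP-ng host: create a shared SR backed by an existing Gluster volume.
# "gluster1:/glustervolume" and the backup servers are placeholder names.
xe sr-create content-type=user shared=true type=glusterfs \
  name-label=GlusterSharedStorage \
  device-config:server=gluster1:/glustervolume \
  device-config:backupservers=gluster2:gluster3
```

The backupservers entry lets the host fall back to another Gluster peer if the primary server is unreachable at mount time.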
-
@r1 I have multiple Debian VMs running as slave nodes for my k8s cluster. For most apps, the config and data live on an NFS share; very few apps require block storage, due to internal database locking requirements. Fundamentally, apps can restart on any k8s node, so if an app needs access to block storage, it has to be available to every node. Another requirement is that it needs to be as fast as possible, so Gluster or Ceph running over attached SSDs would do the job. Currently I am running rook-ceph within k8s, and migrating it is a headache, hence I'm evaluating other options.
-
@olivierlambert that's great to hear! Is there a guide that I can look into?