Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng
-
@geek-baba Are you looking to expose RBD over iSCSI and use that as an SR for your XCP-ng hosts? If you don't need an SR, I think the rbd client in your app VMs should work fine.
Maybe we are missing the larger picture of your implementation.
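If that is the plan, a very rough sketch of that path from a gateway machine could look like the following (image, backstore, and IQN names are made up; a production setup would rather use the ceph-iscsi gateway tooling, and you would then create a regular lvmoiscsi SR on the XCP-ng side):

```
# Create and map an RBD image on a gateway node that has rbd + targetcli
rbd create rbd/xcp-lun0 --size 102400   # size in MB, i.e. 100 GiB
rbd map rbd/xcp-lun0                    # typically appears as /dev/rbd0

# Export the mapped device over iSCSI with LIO/targetcli
targetcli /backstores/block create name=xcp-lun0 dev=/dev/rbd0
targetcli /iscsi create iqn.2020-04.local.example:xcp
targetcli /iscsi/iqn.2020-04.local.example:xcp/tpg1/luns create /backstores/block/xcp-lun0
```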
-
@geek-baba said in Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng:
@olivierlambert any update on this topic? It's been almost a year now, so I'm wondering: is it safe to use now?
It was always "safe". Gluster packages are now available directly in XCP-ng 8.1
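For instance, on an 8.1 host they should show up straight from the default repos (package names below are the usual CentOS ones; double-check on your install):

```
# On an XCP-ng 8.1 host: see what Gluster packages the repos offer
yum list available | grep -i gluster

# The client side is typically just these two
yum install glusterfs glusterfs-fuse
```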
-
@r1 I have multiple Debian VMs running as worker nodes for my k8s cluster. For most apps, the config and data live on an NFS share; a few apps require block storage because of their internal database locking requirements. Fundamentally, apps can start on any of the k8s nodes when restarted, so if an app needs access to block storage, it should be available to each node. Another requirement is that it needs to be as fast as possible, so Gluster or Ceph running over attached SSDs would do the job. Currently I am running rook-ceph within k8s, and migrating it is a headache, hence I am evaluating other options.
-
@olivierlambert that's great to hear! Is there a guide I can look into?
-
Install the Gluster client packages and create an SR of the Gluster type. Do you have any Gluster knowledge, or are you starting from scratch?
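Roughly like this, assuming the client packages above are installed and an existing Gluster volume is reachable (server and volume names are placeholders; check the exact `device-config` keys against the XCP-ng docs):

```
# Create a shared SR on top of an existing Gluster volume
xe sr-create content-type=user shared=true type=glusterfs \
    name-label=gluster-sr \
    device-config:server=gluster1:/vol0
# Optionally list other nodes of the volume for failover:
#   device-config:backupservers=gluster2:gluster3
```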
-
@olivierlambert I have been following it and installed it on a k8s cluster using Heketi; I don't know much more than that. My goal is to have each SSD replicated and exposed as an SR to the VMs.
-
I'm not sure I get it: do you have an external Gluster cluster set up outside your XCP-ng hosts? Or do you want to use each host's local SSD as a Gluster resource? (hyperconvergence)
-
@olivierlambert hyperconvergence, that's the goal.
-
I don't know your plan for the Gluster servers (in VMs? in the `dom0`?). We are about to integrate the Ceph and Gluster drivers in the official repo soon, so it will be even easier to set up.
-
@olivierlambert in the VMs, and yes, I am looking for something that is supported and does not break when upgrading. I have alternatives I can use now, but a hyperconverged storage solution like XOSAN is the best way to handle it.
-
After the driver and yum packages are integrated, an upgrade with `yum` will preserve everything.
-
@geek-baba said in Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng:
have multiple Debian VMs running as worker nodes for my k8s cluster
So you have VMs on your compute hosts running your apps as well as storage? Did you pass through your SSD disks to these VMs? If yes, then I think it's straightforward to install the Gluster or Ceph client utilities in the dom0 and then attach an SR backed by those VMs.
We will let you know once these are yum-installable. BTW, you could also run your Gluster or Ceph cluster outside your compute hosts and use it with those utilities.
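On the Ceph side, a minimal sketch could be a CephFS export from those storage VMs mounted as an SR, if the `cephfs` SR type is available on your version (monitor address and path are made up, and the Ceph secret handling is omitted here):

```
# In the dom0: install the Ceph client utilities
yum install ceph-common

# Attach the CephFS export served by the storage VMs as a shared SR
xe sr-create content-type=user shared=true type=cephfs \
    name-label=ceph-sr \
    device-config:server=192.168.1.10 \
    device-config:serverpath=/xcp-sr
```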
-
@r1 I am not sure I understand the question, but let me explain how it's set up today:
- 4 bare-metal servers running XCP-ng in a pool. All of them have a boot SSD (sda) and a data SSD (sdb). The pool has an NVMe NAS attached as an NFS SR.
- 10 Debian 9 VMs are created and hosted on the NFS SR; they run as Kubernetes (K8s) master or worker nodes.
- Most K8s Deployments or StatefulSets use the NFS SR for all the persistent data needed to run the app, which in turn makes the actual app ephemeral/transient: it can be recreated on any Debian host (running as a K8s node) depending on load and need. This is the optimal state, and I'd declare these apps 99% cloud native.
- The issue starts when a few apps, even though they ship as Docker containers, want block storage: the internal databases they use require locking mechanisms that cannot be achieved over NFS.
- To overcome the issue, I expose the data SSD (sdb) to all the Debian hosts and then run Ceph from the K8s cluster as a local storage class. In this setup, most of the data is on the NFS/NAS, but the components that require block storage are mounted on the Ceph cluster. This is where it starts to get complicated, and I wanted to evaluate whether it could be done at the XCP-ng level, meaning the 4 SSDs attached to the XCP-ng hosts replicated with each other and presented to the hosts as a single SR (rough sketch of what I mean below).
I hope I was able to explain what I am trying to achieve, but it's never that simple, lol.
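Roughly, the XCP-ng-level setup I'm picturing would be something like this (hostnames, paths, and the replica count are just illustrative, and I may well be off on the details):

```
# On each of the 4 hosts: turn the data SSD into a Gluster brick
mkfs.xfs /dev/sdb
mkdir -p /bricks/sdb
mount /dev/sdb /bricks/sdb
mkdir /bricks/sdb/brick

# From one host: form the trusted pool and create a replicated volume
# (replica 2 or 3 across 4 bricks would be a more common layout)
gluster peer probe host2
gluster peer probe host3
gluster peer probe host4
gluster volume create vol0 replica 4 \
    host1:/bricks/sdb/brick host2:/bricks/sdb/brick \
    host3:/bricks/sdb/brick host4:/bricks/sdb/brick
gluster volume start vol0

# Finally, present the volume to the whole pool as one shared SR
xe sr-create content-type=user shared=true type=glusterfs \
    name-label=gluster-sr device-config:server=host1:/vol0
```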
-
@geek-baba That was easy to follow, thanks for laying it out.
So when you run `rook-ceph` and create an SC for your pods, why do you need a Ceph SR on the XCP-ng host? Your PV will be using rbd anyway, and the host most likely won't come into the picture. Are you creating a local ext SR on the data SSD (sdb), or passing it to the VMs as a raw disk?
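For illustration, with Rook's usual example StorageClass name, a claim like this ends up as an rbd image provisioned entirely inside the VMs; the XCP-ng hosts never see it (adjust the SC name to yours):

```
# A minimal block PVC against a rook-ceph StorageClass
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: rook-ceph-block
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
```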
-
@r1 The way rook-ceph works is that you pass a raw disk to the Debian hosts and it then creates the OSDs and the cluster. I have a few issues with this:
- Adding and removing k8s nodes takes just a few commands, is very quick, and means zero downtime for apps; but if I am running rook-ceph, I need to rebuild the Ceph cluster every time I do that, which is extremely time-consuming and pure overhead.
- Each VM being backed up daily/weekly is also extremely big and consumes a lot of resources and storage space.
Instead, an XCP-ng-native local cluster that the VMs can use as a local block-type SR would simplify the implementation many times over.