Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng

  • @olivierlambert that's great to hear! Is there a guide I can look into?

  • XCP-ng Team

    Install the Gluster client packages and create an SR of Gluster type. Do you have any Gluster knowledge, or are you starting from scratch?
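
    For reference, a rough sketch of those two steps on the XCP-ng host. The hostnames, the volume name `vol0`, and the exact `device-config` keys are assumptions for illustration; adapt them to your own Gluster cluster:

    ```shell
    # On the XCP-ng host (dom0): install the Gluster client packages.
    yum install -y glusterfs glusterfs-fuse

    # Create a shared SR backed by an existing Gluster volume.
    # gluster1:/vol0 is a placeholder for your own server:volume;
    # backupservers lists the other nodes of the Gluster cluster.
    xe sr-create content-type=user shared=true type=glusterfs \
        name-label=gluster-sr \
        device-config:server=gluster1:/vol0 \
        device-config:backupservers=gluster2:gluster3
    ```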

  • @olivierlambert I have been following it and installed it on a k8s cluster using heketi; I don't know much more than that. My goal is to create a replica of each SSD, exposed as an SR to the VMs.

  • XCP-ng Team

    I'm not sure I get it: do you have an external Gluster cluster set up outside your XCP-ng hosts? Or do you want to use each host's local SSD as a Gluster resource? (hyperconvergence)

  • @olivierlambert hyperconvergence - that's the goal.

  • XCP-ng Team

    I don't know your plan for the Gluster servers (in VMs? in the dom0?).

    We are about to integrate the Ceph and Gluster drivers in the official repo, so it will be even easier to set up.

  • @olivierlambert in the VMs. And yes, I am looking for something that is supported and does not break when upgrading. I have alternatives I can use now, but a hyperconverged storage like XOSAN is the best way to handle it.

  • XCP-ng Team

    Once the drivers and yum packages are integrated, a yum upgrade will preserve everything.

  • XCP-ng Team

    @geek-baba said in Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng:

    have multiple debian vm's that are running as slave nodes for my k8s cluster

    So you have VMs on your compute hosts running your apps as well as storage? Did you pass through your SSD disks to these VMs? If yes, then I think it's straightforward to install the Gluster or Ceph client utilities in dom0 and then attach an SR backed by the cluster running in those VMs.

    We will let you know once these are yum-installable. BTW, you could also have your Gluster or Ceph cluster outside your compute hosts and use it with those utilities.

  • @r1 I am not sure I understand the question, but let me explain how it's set up today:

    1. Four bare-metal servers running XCP-ng in a pool. All of them have a boot SSD (sda) and a data SSD (sdb). The pool has an NVMe NAS attached as an NFS SR.
    2. 10 Debian 9 VMs are created and hosted on the NFS SR, running as Kubernetes (K8s) master or slave nodes.
    3. Most K8s Deployments or StatefulSets use the NFS SR for all the persistent data the app needs. That in turn makes the app itself ephemeral or transient: it can be created on any Debian host (running a k8s node) depending on load and need. This is the optimal state, and I'd call these apps 99% cloud native.
    4. The issue occurs with a few apps that, even though they run as Docker containers, need block storage: their internal DBs require locking mechanisms that cannot be achieved over NFS.
    5. To overcome this, I expose the data SSD (sdb) to all the Debian hosts and then run Ceph from the K8s cluster as a local storage class. In this setup most of the data is on NFS/NAS, but the components that require block storage mount the Ceph cluster. This is where it starts to get complicated, and I wanted to evaluate whether it could be done at the XCP-ng level - meaning the 4 SSDs attached to the XCP-ng hosts, replicated with each other and presented to the hosts as a single SR.

    I hope I was able to explain what I am trying to achieve, but it's never simple lol
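
    To illustrate the locking problem from point 4: embedded databases typically take POSIX byte-range locks (`fcntl`), and it is this mechanism that NFS setups often handle unreliably, which is why those apps want real block storage. A minimal local sketch of the semantics these apps depend on (file path and process layout are illustrative only):

    ```python
    import fcntl
    import os
    import tempfile

    # Create a file and take an exclusive POSIX byte-range lock on it,
    # the same fcntl-style lock an embedded database would take.
    path = tempfile.NamedTemporaryFile(delete=False).name
    fd = os.open(path, os.O_RDWR)
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)

    pid = os.fork()
    if pid == 0:
        # Child process: a second exclusive lock on the same file must
        # be refused while the parent holds it. On a misbehaving NFS
        # setup, this refusal is exactly what you cannot rely on.
        fd2 = os.open(path, os.O_RDWR)
        try:
            fcntl.lockf(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os._exit(1)  # lock was wrongly granted
        except OSError:
            os._exit(0)  # lock correctly refused
    status = os.waitpid(pid, 0)[1]
    print("second lock refused:", os.WEXITSTATUS(status) == 0)
    os.unlink(path)
    ```

    On a local filesystem the second lock is always refused; on NFS the same refusal depends on the lock protocol (NLM/NFSv4) working end to end.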

  • XCP-ng Team

    @geek-baba That was easy 🙂 Thanks for laying it out.

    So when you run rook-ceph and create a StorageClass for your pods, why do you need a Ceph SR on the XCP-ng host? Your PVs will be using RBD anyway, and the host most likely won't come into the picture.

    Are you creating a local ext SR on the data SSD (sdb), or passing it as a raw disk to the VMs?

  • @r1 the way rook-ceph works is that you pass a raw disk to the Debian host and it creates the OSDs and the cluster. I have a few issues with this:

    1. Adding and removing k8s nodes is just a few commands: very quick, with zero downtime for apps. But if I am running rook-ceph, I need to rebuild the cluster every time I do that, which is extremely time-consuming and extra overhead.
    2. Each VM being backed up daily/weekly is also extremely big and consumes a lot of resources and storage space.

    Instead, an XCP-ng-native local cluster that VMs can use as a local block-type SR would simplify the implementation many times over.
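
    As a sketch of what could sit underneath such an SR: a Gluster volume with one brick per host on the data SSD, replicated 4 ways. Hostnames (`xcp1`..`xcp4`), the brick path, and the volume name are placeholders; whether this runs in dom0 or in per-host VMs is exactly the open question above:

    ```shell
    # On one node, peer the other three hosts into the cluster.
    gluster peer probe xcp2
    gluster peer probe xcp3
    gluster peer probe xcp4

    # One brick per host on the data SSD (sdb), replicated 4 ways,
    # exposed as a single volume that a shared SR could then mount.
    gluster volume create vmstore replica 4 \
        xcp1:/bricks/sdb/vmstore xcp2:/bricks/sdb/vmstore \
        xcp3:/bricks/sdb/vmstore xcp4:/bricks/sdb/vmstore
    gluster volume start vmstore
    ```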
