XCP-ng

    Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng

    Development · 35 Posts · 5 Posters · 8.0k Views
    • olivierlambert (Vates 🪐 Co-Founder CEO)

      SMAPIv3 is full of surprises… Some of them are great (no need to statically modify XAPI config files to add a new SR backend!!) but some aren't (performance issues).

    • r1 (XCP-ng Team)

        @olivierlambert @ronan-a How far are you from showing us the first implementation? Eager to try.
        I thought performance would stay as it was… what caused the drop?

    • olivierlambert (Vates 🪐 Co-Founder CEO)

          We already have an ext4-ng driver working. But benchmarks are… not good so far. So now the goal is to investigate and find the bottleneck. Oddly enough, even while using what we think is the same datapath as SMAPIv1 (yes, you can do that), it's still slower.

          In the end, we must really understand exactly how this whole thing works, so we can improve it ourselves (there is no public master branch for some repos, thanks Citrix…)

          edit: we are using the datapath coded by Citrix, so they should suffer the same problem as us for GFS2. I don't understand how this could be "production ready", but that's another story.

    • geek-baba

            @olivierlambert just checking in, the wiki still says it's not production ready…

            @borzel how is your implementation performing over the last few months?

    • olivierlambert (Vates 🪐 Co-Founder CEO)

              Depends on what you mean by production ready. E.g. this driver has been used for 2 years in XOSAN without issues. But we support XOSAN ourselves, not the driver alone.

    • geek-baba @olivierlambert

                @olivierlambert any update on this topic? It's been almost a year now, so I'm wondering: is it safe to use now?

                Also, I stumbled upon a Ceph implementation using SMAPIv3; I believe this is a better option than GlusterFS?

                https://xcp-ng.org/forum/topic/1151/ceph-qemu-dp-in-xcp-ng-7-6/12

    • jmccoy555 @geek-baba

                  @geek-baba or a CephFS option!

    • geek-baba @jmccoy555

                    @jmccoy555 that does not meet my need. I have a K8s cluster and everything works fine, except some apps require block storage and it should be fast. I have a rook-ceph cluster running inside the k8s cluster, and I was looking to move it out so I don't have to worry about it during k8s cluster upgrades/migrations.

                    http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/002815.html

    • jmccoy555 @geek-baba

                      @geek-baba Fair enough, just another option. I don't like block storage; I've always favoured NFS over iSCSI. I think the speed trade-off is worth the transparency.

    • geek-baba @jmccoy555

                        @jmccoy555 I don't like block storage either; all my VMs run off an NVMe NAS over a 10 Gig network, and all my k8s apps use dynamically provisioned NFS PVs/PVCs. Unfortunately, some apps today are not cloud native and need block storage for their built-in database. I looked at CephFS and it looks like another option to NFS; I will test the performance at some point…

    • r1 (XCP-ng Team)

                          @geek-baba Are you looking to expose RBD over iSCSI and use that as an SR for your XCP-ng hosts? If you don't need an SR, I think the rbd client in your app VMs should work fine.

                          Maybe we are missing the larger picture of your implementation.
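                          For context, consuming RBD directly from inside an app VM (no XCP-ng SR involved) could look roughly like this. This is a hedged sketch: it assumes `ceph-common` is installed in the VM with a valid `/etc/ceph/ceph.conf` and keyring, and the pool/image names are placeholders, not anything from this thread.

                          ```shell
                          # Create a block image in an existing pool (size in MB here: 20 GiB).
                          rbd create mypool/k8s-db --size 20480

                          # Map it into the VM as a kernel block device (e.g. /dev/rbd0).
                          rbd map mypool/k8s-db

                          # Format once, then mount it for the app that needs block storage.
                          mkfs.ext4 /dev/rbd0
                          mount /dev/rbd0 /mnt/k8s-db
                          ```

                          For Kubernetes proper, a CSI driver (e.g. ceph-csi) would normally do the map/mount per node instead of doing it by hand.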

    • olivierlambert (Vates 🪐 Co-Founder CEO) @geek-baba

                            @geek-baba said in Idea/Dev test: HowTo use GlusterFS as SR on XCP-ng:

                            @olivierlambert any update on this topic, almost over an year now so wondering, is it safe to use now?

                            It was always "safe". Gluster packages are now available directly in XCP-ng 8.1.
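                            Since the packages ship with XCP-ng 8.1, pulling in the Gluster client bits should be a one-liner per host. A sketch, assuming the default 8.1 repos; exact package names may differ slightly on your version:

                            ```shell
                            # Run on each XCP-ng host that will attach the Gluster SR.
                            yum install -y glusterfs glusterfs-fuse
                            ```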

    • geek-baba @r1

                              @r1 I have multiple Debian VMs running as slave nodes for my k8s cluster. For most apps, the config and data live on an NFS share; very few apps require block storage due to internal database locking requirements. Fundamentally, apps can restart on any k8s node, so if an app needs block storage, it has to be available from every node. Another requirement is that it needs to be as fast as possible, so Gluster or Ceph running over attached SSDs would do the job. Currently I am running rook-ceph within k8s, and migrating it is a headache, hence evaluating other options.

    • geek-baba @olivierlambert

                                @olivierlambert that's great to hear! Is there a guide I can look into?

    • olivierlambert (Vates 🪐 Co-Founder CEO)

                                  Install the Gluster client packages and create an SR of the Gluster type. Do you have any Gluster knowledge, or are you starting from scratch?
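                                  The SR-creation step could look something like the sketch below, assuming the `glusterfs` SR type from the XCP-ng driver; the volume address and backup servers are placeholders for your own Gluster setup, not values from this thread:

                                  ```shell
                                  # Create a shared Gluster SR pointing at an existing Gluster volume.
                                  xe sr-create content-type=user shared=true type=glusterfs \
                                    name-label=GlusterSR \
                                    device-config:server=192.168.1.20:/myvolume \
                                    device-config:backupservers=192.168.1.21:192.168.1.22
                                  ```

                                  `backupservers` lets the host fall back to another Gluster node if the primary is unreachable at mount time.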

    • geek-baba @olivierlambert

                                    @olivierlambert I have been following it and installed it on a k8s cluster using Heketi; I don't know much more than that. My goal is to create a replica of each SSD, exposed as an SR to the VMs.

    • olivierlambert (Vates 🪐 Co-Founder CEO)

                                      I'm not sure I get it: do you have an external Gluster cluster set up outside your XCP-ng hosts? Or do you want to use each host's local SSD as a Gluster resource? (hyperconvergence)

    • geek-baba @olivierlambert

                                        @olivierlambert hyperconvergence, that's the goal.
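                                        A hyperconverged layout like this, one brick per host's local SSD, replicated across hosts, could be sketched with the standard Gluster CLI. Host names and brick paths below are placeholders (the Gluster servers could live in the dom0 or in one VM per host):

                                        ```shell
                                        # Join the hosts into one trusted pool (run from host1).
                                        gluster peer probe host2
                                        gluster peer probe host3

                                        # One brick per host's SSD, 3-way replication so any host can fail.
                                        gluster volume create gv0 replica 3 \
                                          host1:/mnt/ssd/brick host2:/mnt/ssd/brick host3:/mnt/ssd/brick

                                        gluster volume start gv0
                                        ```

                                        The resulting `gv0` volume is what an SR on each host would then mount.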

    • olivierlambert (Vates 🪐 Co-Founder CEO)

                                          I don't know your plan for the Gluster servers (in VMs? in the dom0?).

                                          We are about to integrate the Ceph and Gluster drivers into the official repo soon, so it will be even easier to set up.

    • geek-baba @olivierlambert

                                            @olivierlambert in the VMs, and yes, I am looking for something that is supported and does not break when upgrading. I have alternatives I can use now, but a hyperconverged storage layer like XOSAN is the best way to handle it.
