XCP-ng

    XOSTOR hyperconvergence preview

    • ronan-a Vates πŸͺ XCP-ng Team @Swen

      @Swen I suppose that can work with our driver; unfortunately I haven't tested it.
      It could be useful, but we would have to see what impact it has on the DRBD network: for example, a bad case where a chain of diskless VHDs is suddenly activated on a host πŸ™‚

    • furyflash777 @olivierlambert

        @olivierlambert
        Hi,

        1. Is it possible to create more than one LINSTOR SR in a pool?

        I have this error:
        Error code: SR_BACKEND_FAILURE_5006
        Error parameters: , LINSTOR SR creation error [opterr=LINSTOR SR must be unique in a pool],

        2. Also, is it possible to have a hybrid SSD + HDD volume (cache/auto-tiering)?
    • ronan-a Vates πŸͺ XCP-ng Team @furyflash777

      @furyflash777 You can only use one LINSTOR SR per pool, due to a limitation in how the DRBD volume database is shared (and how the LINSTOR controller is started). Why do you want several SRs?
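
      If SR creation fails with that error, a quick way to see whether a LINSTOR SR already exists in the pool is something like the command below (the type value is an assumption; adjust it if your SR reports a different type):

      xe sr-list type=linstor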

    • furyflash777 @ronan-a

            @ronan-a

      The hosts were previously used with VMware vSAN;
      I have NVMe and HDD disks in the same hosts.

      I am new to LINSTOR/DRBD/XOSTOR.

      What would be better: using LVM on top of mdadm software RAID, or not?

            If a disk fails on one of the servers, will there be an automatic failover to the remaining replica?
      Will there be an interruption in this case?
            Will it start writing a new replica to the remaining host?

    • ronan-a Vates πŸͺ XCP-ng Team @furyflash777

              @furyflash777 I don't recommend mixing HDDs and SSDs with our driver.

      Because replication is managed by LINSTOR, try not to use RAID. A LINSTOR replication count of 3 is robust, so if you have many disks on each machine, you can aggregate them into a linear volume.
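
      As a rough illustration (not an official procedure: the device names and the linstor_group VG name below are assumptions, use whatever your setup expects), aggregating several disks into one linear LVM volume group could look like this:

      pvcreate /dev/sdb /dev/sdc /dev/sdd
      vgcreate linstor_group /dev/sdb /dev/sdc /dev/sdd
      vgs linstor_group    # check the total, linearly aggregated capacity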

              If a disk fails on one of the servers, will there be an automatic failover to the remaining replica?

              As long as there is no split brain, or data corruption, you can automatically use a copy of a VDI. πŸ˜‰

              Will it start writing a new replica to the remaining host?

      It depends on the policy in use, but yes, it's possible.

    • learningdaily

      @ronan-a - I've been lurking on this forum topic for too long. I've finally implemented the scripts across three of my hosts, and also added the "Storage" network modifications explained by @Swen, and it is working beautifully. Failover is handled by XCP-ng bonded networking if a switch fails, and hosts can reboot without any loss in speed or data.

                You may recall several years ago I was interested in seeing CEPH implemented natively, but your LINSTOR implementation is so much simpler to manage. Thanks and keep up the good work.

    • BenHuzo

      I've also been watching this thread for a while, and I noticed an RC version is about to launch. I am actively looking for a hyperconverged solution for the company I'm engaged with: I want to move off a single-point-of-failure SAN onto a multi-node cluster. We plan to make this change very soon (within a couple of months) regardless of what we end up using, but after much research this seems highly anticipated and exactly what I'm looking for... thank you!!

    • furyflash777 @Swen

                    @Swen said in XOSTOR hyperconvergence preview:

                    @andersonalipio said in XOSTOR hyperconvergence preview:

                    Hello all,

      I got it working just fine so far in all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab, we have 2 hosts with 2 network cards each: one for management and external VM traffic, and the second should be exclusive to storage, since it is way faster.

                    We are using a separate network in our lab. What we do is this:

                    1. get the node list from the running controller via
                    linstor node list
                    
      2. take a look at the node interface list via
                    linstor node interface list <node name>
                    
      3. modify each node's interface via
                    linstor node interface modify <node name> default --ip <ip>
                    
      4. check the addresses via
                    linstor node list
                    

                    Hope that helps!

                    Another option:

                    1. Create additional interface
                    linstor node interface create <node name> storage_nic <ip>
                    
      2. Set preferred interface for each node
                    linstor node set-property <node name> PrefNic storage_nic
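
      For example, a small sketch of applying that second option to every node from the host running the controller (the node names and IPs here are made up, replace them with yours):

      for entry in "node1=172.16.0.1" "node2=172.16.0.2" "node3=172.16.0.3"; do
          node="${entry%%=*}"; ip="${entry#*=}"
          linstor node interface create "$node" storage_nic "$ip"
          linstor node set-property "$node" PrefNic storage_nic
      done
      linstor node interface list node1    # verify the new interface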
                    
    • BenHuzo @olivierlambert

                      @olivierlambert any update on this? thank you

    • olivierlambert Vates πŸͺ Co-Founder CEO

      We have started to work on the initial UI πŸ™‚ The CLI works pretty well now, so we're almost there πŸ™‚ We can set up a demo install inside your infrastructure if you want.

    • BenHuzo @olivierlambert

                          @olivierlambert Thank you, I have many questions - is there a call/demo you could do?

    • olivierlambert Vates πŸͺ Co-Founder CEO

      Go there and ask for preview access on your hardware: https://vates.tech/contact/

    • BenHuzo @olivierlambert

                              @olivierlambert Thank you for pointing that direction! I went ahead and made a request.

    • JensH

      I have been working with XenServer/Citrix Hypervisor and Citrix products like Virtual Apps for years.
      Meanwhile I have also had XCP-ng running on a test server for a while.
      Well, I have now decided to build a new small cluster with XCP-ng. One reason is the XOSTOR option.

      This new pool is planned with 3 nodes and multiple SSD disks (not yet NVMe) in each host.
      I am wondering how XOSTOR creates the LV on a VG with, let's say, 4 physical drives:
      Will it be a linear LV? Is there any option for striping or other RAID levels available/planned?

                                Looking forward to your reply.
                                Thanks a lot for all the good work in a challenging environment.

    • olivierlambert Vates πŸͺ Co-Founder CEO

      We don't need/want RAID levels or things like that: since the data is already replicated to other hosts, that would make it too redundant. So it will be like a linear LV, yes πŸ™‚
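
      Once the SR exists, the layout can be checked with standard LVM tooling (nothing XOSTOR-specific, just a way to confirm the segment type):

      lvs -o lv_name,vg_name,segtype,devices

      A linear LV reports segtype as linear, with its backing physical volumes listed in the devices column.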

    • JensH @olivierlambert

      @olivierlambert thank you for the quick answer.
      To be on the really safe side, this means a replication count not lower than 3 would be useful (from my perspective).

      What would happen if a node of a 3-node cluster with replication count 3 (so all nodes have a copy) fails?
      Would everything stop because the replication count is higher than the number of available nodes?
      (I refer to post https://xcp-ng.org/forum/post/54086)

    • ronan-a Vates πŸͺ XCP-ng Team @JensH

                                      @JensH No. You can continue to use your pool. New resources can still be created and LINSTOR can sync volumes when the connection to the lost node is recreated.

      As long as there is no split brain and you have 3 hosts online, it's OK; that's why we recommend using 4 machines.
      With a pool of 3 machines, if you lose a node you increase the risk of split brain on a resource, but you can continue to create and use resources.

    • olivierlambert Vates πŸͺ Co-Founder CEO

      Also, keep in mind that LINSTOR puts things in read-only as soon as you are under your replication target.

      That means, in a 3-host scenario:

      • if you have replication 3, any host that becomes unreachable will trigger read-only on the other 2
      • if you have replication 2, you can lose one host without any consequence

                                        So for 3 machines, replication 2 is a sweet spot in terms of availability.
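
      For reference, replica placement and per-resource state can be checked from the controller node with the standard LINSTOR client, e.g.:

      linstor resource list

      Each resource is listed with the nodes holding a replica and its current state (for example UpToDate or Diskless).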

    • Wilken

                                          Hi,

      I've run the install script on an XCP-ng 8.2.1 host. The output of the following command:

      rpm -qa | grep -E "^(sm|xha)-.*linstor.*"

      shows:

      sm-2.30.8-2.1.0.linstor.5.xcpng8.2.x86_64

      but

      xha-10.1.0-2.2.0.linstor.1.xcpng8.2.x86_64

      is missing, because xha is already installed in version:

      xha-10.1.0-2.1.xcpng8.2.x86_64

      from XCP-ng itself.

      Is this package still needed from the linstor repo?
      Should I uninstall it and re-run the install script?

                                          BR,
                                          Wilken

    • olivierlambert Vates πŸͺ Co-Founder CEO

      Question for @ronan-a.
