XCP-ng

    XOSTOR hyperconvergence preview

• furyflash777 @Swen
      last edited by furyflash777

      @Swen said in XOSTOR hyperconvergence preview:

      @andersonalipio said in XOSTOR hyperconvergence preview:

      Hello all,

I got it working just fine so far on all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab, we have 2 hosts with 2 network cards each: one for mgmt and external VM traffic, and the second should be exclusive to storage, since it is way faster.

We are using a separate network in our lab. What we do is this:

1. Get the node list from the running controller via
   linstor node list

2. Take a look at the node interface list via
   linstor node interface list <node name>

3. Modify each node's interface via
   linstor node interface modify <node name> default --ip <ip>

4. Check the addresses via
   linstor node list

      Hope that helps!

Another option (a worked example follows below):

1. Create an additional interface:
   linstor node interface create <node name> storage_nic <ip>

2. Set the preferred interface for each node:
   linstor node set-property <node name> PrefNic storage_nic
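
To make that concrete, here is what this second option could look like on a hypothetical 3-node pool (node names and storage IPs are only examples, adjust them to your setup):

   # create the storage interface on each node
   linstor node interface create xcp-host1 storage_nic 10.10.10.1
   linstor node interface create xcp-host2 storage_nic 10.10.10.2
   linstor node interface create xcp-host3 storage_nic 10.10.10.3
   # tell LINSTOR to prefer that interface for replication traffic
   linstor node set-property xcp-host1 PrefNic storage_nic
   linstor node set-property xcp-host2 PrefNic storage_nic
   linstor node set-property xcp-host3 PrefNic storage_nic
   # verify
   linstor node list
   linstor node interface list xcp-host1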
      
• BenHuzo @olivierlambert
        last edited by

@olivierlambert Any update on this? Thank you.

• olivierlambert Vates 🪐 Co-Founder CEO
          last edited by

We have started working on the initial UI 🙂 The CLI works pretty well now, so we're almost there 🙂 We can do a demo install inside your infrastructure if you want.

• BenHuzo @olivierlambert
            last edited by

            @olivierlambert Thank you, I have many questions - is there a call/demo you could do?

• olivierlambert Vates 🪐 Co-Founder CEO
              last edited by

              Go there and ask for a preview access on your hardware: https://vates.tech/contact/

• BenHuzo @olivierlambert
                last edited by

@olivierlambert Thank you for pointing me in that direction! I went ahead and made a request.

• JensH
                  last edited by JensH

I have been working for years with XenServer/Citrix Hypervisor and Citrix products like Virtual Apps.
Meanwhile I have also had XCP-ng running on a test server for a while.
I have now decided to build a new small cluster with XCP-ng. One reason is the XOSTOR option.

This new pool is planned with 3 nodes and multiple SSD disks (not yet NVMe) in each host.
I am wondering how XOSTOR creates the LV on a VG with, let's say, 4 physical drives:
Will it be a linear LV? Is there any option for striping or other RAID levels available/planned?

                  Looking forward to your reply.
                  Thanks a lot for all the good work in a challenging environment.

• olivierlambert Vates 🪐 Co-Founder CEO
                    last edited by

We don't need or want RAID levels or anything like that: the data is already replicated to other hosts, so adding RAID would make it overly redundant. So it will be like a linear LV, yes 🙂
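
For readers less familiar with LVM: a linear LV simply concatenates the physical volumes of its volume group, while striping has to be requested explicitly. A rough sketch in plain LVM commands (illustration only; the device and group names are examples, not necessarily what the install script runs):

   vgcreate linstor_group /dev/sdb /dev/sdc /dev/sdd /dev/sde   # one VG over 4 physical drives
   lvcreate -n thin_device -l 100%FREE linstor_group            # linear LV (the default), no striping
   # a striped LV would need an explicit stripe count, e.g. lvcreate -i 4, which XOSTOR does not use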

• JensH @olivierlambert
                      last edited by

@olivierlambert thank you for the quick answer.
To be on the really safe side, this means a replication count of no lower than 3 would be useful (from my perspective).

What would happen if a node of a 3-node cluster with replication count 3 (so all nodes have a copy) fails?
Would everything stop because the replication count is higher than the number of available nodes?
(I refer to post https://xcp-ng.org/forum/post/54086)

• ronan-a Vates 🪐 XCP-ng Team @JensH
                        last edited by ronan-a

@JensH No, you can continue to use your pool. New resources can still be created, and LINSTOR can sync the volumes when the connection to the lost node is re-established.

As long as there is no split brain and you have 3 hosts online, it's OK; that's why we recommend using 4 machines.
With a pool of 3 machines, if you lose a node you increase the risk of split brain on a resource, but you can continue to create and use resources.
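
If you want to see how the cluster reacts when a node drops, the state can be inspected from the host running the LINSTOR controller; a minimal sketch (standard commands, no special options assumed):

   linstor node list        # the lost node should be reported as offline
   linstor resource list    # shows where each resource is placed and its current state
   drbdadm status           # per-host DRBD view: connection and disk states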

• olivierlambert Vates 🪐 Co-Founder CEO
                          last edited by

Also, keep in mind that LINSTOR puts things in read-only as soon as you are under your replication target.

This means, in a 3-host scenario:

• if you have replication 3, any host that is unreachable will trigger read-only on the 2 others
• if you have replication 2, you can lose one host without any consequence

So for 3 machines, replication 2 is a sweet spot in terms of availability.
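
To check which replication (place count) an XOSTOR SR is actually using, LINSTOR can be queried directly; a minimal sketch (the resource-group name depends on how the SR was created):

   linstor resource-group list   # shows the place count (replication) per resource group
   linstor resource list         # shows on how many nodes each resource is actually placed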

• Wilken
                            last edited by

                            Hi,

I've run the install script on an XCP-ng 8.2.1 host. In the output of the following command:

rpm -qa | grep -E "^(sm|xha)-.*linstor.*"

sm-2.30.8-2.1.0.linstor.5.xcpng8.2.x86_64

is listed, but

xha-10.1.0-2.2.0.linstor.1.xcpng8.2.x86_64

is missing, because xha is already installed in version:

xha-10.1.0-2.1.xcpng8.2.x86_64

from XCP-ng itself.

Is this package still needed from the linstor repo?
Should I uninstall it and re-run the install script?

                            BR,
                            Wilken

• olivierlambert Vates 🪐 Co-Founder CEO
                              last edited by

                              question for @ronan-a

• ronan-a Vates 🪐 XCP-ng Team @Wilken
                                last edited by

                                @Wilken The modified version of the xha package is no longer needed. You can use the latest version without the linstor tag.

                                It's not necessary to reinstall your XOSTOR SR.
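
For anyone wanting to double-check which xha build is installed before or after updating, standard rpm queries are enough (nothing XOSTOR-specific here):

   rpm -q xha                        # e.g. xha-10.1.0-2.1.xcpng8.2.x86_64 (no linstor tag)
   rpm -qa | grep -E "^(sm|xha)-"    # list the sm and xha packages currently installed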

• Wilken
                                  last edited by

                                  Thank you @olivierlambert and @ronan-a for the quick answer and clarification!

                                  BR,
                                  Wilken

• gb.123
                                    last edited by

                                    @ronan-a

                                    Hi !
                                    Before I test this, I have a small question:
If the VM is encrypted and an XOSTOR SR is enabled, is the VM + memory replicated, or just the VDI?
Once the 1st node is down, will the 2nd node take over as-is, or will the 2nd node go to the 'boot' stage where it asks for the decryption password?

                                    Thanks

• ronan-a Vates 🪐 XCP-ng Team @gb.123
                                      last edited by

@gb-123 How is the VM encrypted? Only the VDIs are replicated.

• gb.123 @ronan-a
                                        last edited by gb.123

                                        @ronan-a

                                        VMs would be using LUKS encryption.

So if only the VDI is replicated and, hypothetically, I lose the master node or any other node actually hosting the VM, will I then have to create the VM again using the replicated disk? Or would it be something like DRBD, where there are actually 2 VMs running in active/passive mode and there is an automatic switchover? Or would it be that one VM is running and the second gets started automatically when the 1st is down?

                                        Sorry for the noob questions. I just wanted to be sure of the implementation.

• Maelstrom96 @gb.123
                                          last edited by

                                          @gb-123 said in XOSTOR hyperconvergence preview:

                                          @ronan-a

                                          VMs would be using LUKS encryption.

So if only the VDI is replicated and, hypothetically, I lose the master node or any other node actually hosting the VM, will I then have to create the VM again using the replicated disk? Or would it be something like DRBD, where there are actually 2 VMs running in active/passive mode and there is an automatic switchover? Or would it be that one VM is running and the second gets started automatically when the 1st is down?

                                          Sorry for the noob questions. I just wanted to be sure of the implementation.

The VM metadata is at the pool level, meaning that you wouldn't have to re-create the VM if the current VM host has a failure. However, memory isn't replicated across the cluster, unless you're doing a live migration, which temporarily copies the VM memory to the new host so it can be moved.

DRBD only replicates the VDI, in other words the disk data, across the active LINSTOR members. If the VM is stopped or is terminated because of a host failure, you should be able to start it back up on another host in your pool, but by default this will require manual intervention to start the VM, and it will require you to enter your encryption password since it will be a cold boot.

If you want the VM to self-start automatically in case of failure, you can use the HA feature of XCP-ng. This wouldn't solve your issue of having to enter your encryption password since, as explained earlier, the memory isn't replicated and the VM would cold-boot from the replicated VDI. Also, keep in mind that enabling HA adds maintenance complexity and might not be worth it.
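
For completeness, a minimal sketch of what enabling HA and auto-restart could look like with xe (the SR name and VM uuid are placeholders; review the XCP-ng HA documentation before enabling this on a production pool):

   SR_UUID=$(xe sr-list name-label=XOSTOR --minimal)            # assumes the shared SR is named "XOSTOR"
   xe pool-ha-enable heartbeat-sr-uuids=$SR_UUID                # use the shared SR for the HA heartbeat
   xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart   # mark the VM for automatic restart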

• gb.123 @Maelstrom96
                                            last edited by

                                            @Maelstrom96

Thanks for your clarification!
I was thinking of testing HA with XOSTOR (if that is at all possible). XOSTOR would also be treated as a 'Shared SR', I guess?
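
If it helps, whether the SR is shared can be checked from the CLI; a small sketch, assuming the SR was created with the linstor type (the uuid is a placeholder):

   xe sr-list type=linstor params=uuid,name-label,shared   # "shared" should report true
   xe sr-param-get uuid=<sr-uuid> param-name=shared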
