    XOSTOR hyperconvergence preview

    • Swen @ronan-a

      @ronan-a If you want me to test some of your fixes, please don't hesitate.

      • fred974 @Swen

        @ronan-a I just had a problem where I cannot deploy the XO Kubernetes recipe on our XOSTOR SR. It works with no problem on local storage. Could you please test whether you are facing the same issue as us, or whether the issue is isolated to me?
        Here is the post that made me realise the issue is with XOSTOR.

        • andersonalipio

          Hello all,

          I got it working just fine so far in all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab we have 2 hosts with 2 network cards each: one for mgmt and VM external traffic, and the second should be exclusive to storage, since it is way faster.

          • Swen @andersonalipio

            @andersonalipio said in XOSTOR hyperconvergence preview:

            Hello all,

            I got it working just fine so far in all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab we have 2 hosts with 2 network cards each: one for mgmt and VM external traffic, and the second should be exclusive to storage, since it is way faster.

            We are using a separate network in our lab. What we do is this:

            1. Get the node list from the running controller via
               linstor node list

            2. Take a look at the node interface list via
               linstor node interface list <node name>

            3. Modify each node's interface via
               linstor node interface modify <node name> default --ip <ip>

            4. Check the addresses again via
               linstor node list

            Hope that helps!
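
            To make the steps concrete, here is a condensed sketch of the whole sequence; the host names and the 192.168.50.0/24 storage subnet are made-up placeholders, not values from this thread:

               # On the host running the LINSTOR controller: list nodes and interfaces
               linstor node list
               linstor node interface list xcp-host1

               # Point each node's default interface at its storage NIC
               linstor node interface modify xcp-host1 default --ip 192.168.50.11
               linstor node interface modify xcp-host2 default --ip 192.168.50.12

               # Verify that the addresses changed
               linstor node list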

            • andersonalipio @Swen

              @Swen Thanks bud! It did the trick!

              I ran the interface modify commands on the master only; it changed all hosts online, with guest VMs running and no downtime at all!

              • TheiLLeniumStudios

                Is it possible to only use 2 hosts for XOSTOR?

                • Swen @TheiLLeniumStudios

                  @TheiLLeniumStudios It should be possible, but it is not recommended: you can end up in a split-brain scenario.

                  • olivierlambert Vates 🪐 Co-Founder CEO

                    We do not want to support 2 hosts for now. In theory it can work if you add a 3rd machine acting as a "Tie Breaker", but it's more complex to set up. However, for a home lab, that should be doable 🙂
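
                    For reference, a minimal sketch of what such a tie breaker could look like with plain LINSTOR tooling, assuming a small third machine is reachable; the node name and IP are placeholders, and this is untested with XOSTOR:

                       # Register the third machine as an extra LINSTOR node (no storage needed)
                       linstor node create tiebreaker 192.168.50.13

                       # Let LINSTOR place quorum tie-breaker resources automatically
                       linstor controller set-property DrbdOptions/auto-add-quorum-tiebreaker true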

                    • Swen

                      @olivierlambert can you please provide an update, or better, a roadmap regarding the implementation of LINSTOR in XCP-ng? I find it hard to tell what status this project is in at the moment. As you know, we are really looking forward to using it in production with our CloudStack installation. Thx for any news. 🙂

                      • olivierlambert Vates 🪐 Co-Founder CEO

                        We are close to a first release (at least an RC). That will be CLI-only, but we already have plans to replace the XOSAN UI in Xen Orchestra with XOSTOR 🙂

                        • Swen @olivierlambert

                          @olivierlambert thx for the quick reply! 🙂 Does close mean days, weeks or months? 😉

                          • olivierlambert Vates 🪐 Co-Founder CEO

                            An RC in weeks, I think.

                            • niko7 @olivierlambert

                              @olivierlambert will XOSTOR support deduplication?

                              • olivierlambert Vates 🪐 Co-Founder CEO

                                Not yet, but I think I've read that LINSTOR supports VDO, so it's possible as a future addition 🙂

                                • Swen

                                  @ronan-a did you test some LINSTOR vars like:
                                  'DrbdOptions/auto-diskful': makes a resource diskful if it was continuously diskless primary for X minutes
                                  'DrbdOptions/auto-diskful-allow-cleanup': allows this resource to be cleaned up after toggle-disk + resync is finished
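
                                  For context, these are LINSTOR properties that can be set per resource definition (or higher up for a pool-wide default); below is a sketch with a placeholder resource name, untested against the XOSTOR driver:

                                     # Hypothetical: become diskful after 5 minutes as a diskless primary,
                                     # and allow that extra replica to be cleaned up again afterwards
                                     linstor resource-definition set-property my-resource DrbdOptions/auto-diskful 5
                                     linstor resource-definition set-property my-resource DrbdOptions/auto-diskful-allow-cleanup true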

                                  Thx for your feedback!

                                  • ronan-a Vates 🪐 XCP-ng Team @Swen

                                    @Swen I suppose that can work with our driver; unfortunately I haven't tested it.
                                    It could be useful, but we would have to see what impact it has on the DRBD network: for example, a bad case where a chain of diskless VHDs is suddenly activated on a host 🙂

                                    • furyflash777 @olivierlambert

                                      @olivierlambert
                                      Hi,

                                      1. Is it possible to create more than one LINSTOR SR in a pool?

                                      I have this error:
                                      Error code: SR_BACKEND_FAILURE_5006
                                      Error parameters: , LINSTOR SR creation error [opterr=LINSTOR SR must be unique in a pool],

                                      2. Also, is it possible to have a hybrid volume ssd+hdd (cache/autotiering)?
                                      • ronan-a Vates 🪐 XCP-ng Team @furyflash777

                                        @furyflash777 You can only use one LINSTOR SR per pool, due to a limitation in how the DRBD volume database is shared (and how the LINSTOR controller is started). Why do you want several SRs?

                                        • furyflash777 @ronan-a

                                          @ronan-a

                                          The hosts were previously used with VMware vSAN; I have NVMe and HDD disks in the same hosts.

                                          I am new to LINSTOR/DRBD/XOSTOR.

                                          What is better: LVM on top of mdadm software RAID, or not?

                                          If a disk fails on one of the servers, will there be an automatic failover to the remaining replica?
                                          Will there be an interruption in this case?
                                          Will it start writing a new replica to the remaining host?

                                          • ronan-a Vates 🪐 XCP-ng Team @furyflash777

                                            @furyflash777 I don't recommend mixing HDDs and SSDs with our driver.

                                            Because the replication is managed by LINSTOR, try not to use RAID. A LINSTOR replication count of 3 is robust, so if you have many disks on each machine, you can aggregate them into a linear volume.
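
                                            As an illustration of aggregating disks into a linear volume, one plausible way is plain LVM; the device names and the volume group name below are placeholders:

                                               # Join two local disks into one linear (non-RAID) volume group
                                               pvcreate /dev/sdb /dev/sdc
                                               vgcreate linstor_group /dev/sdb /dev/sdc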

                                            If a disk fails on one of the servers, will there be an automatic failover to the remaining replica?

                                            As long as there is no split brain or data corruption, you can automatically use a copy of a VDI. 😉

                                            Will it start writing a new replica to the remaining host?

                                            It depends on the policy used, but yes, it is possible.
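
                                            For anyone who wants to check this by hand, a sketch with placeholder node and resource names (the real names come from the resource list on your controller):

                                               # Inspect replica states after a disk failure
                                               linstor resource list

                                               # Manually add a fresh replica of a resource on a healthy node
                                               linstor resource create xcp-host3 xcp-volume-xxxx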
