XCP-ng

    XOSTOR hyperconvergence preview

    465 Posts 51 Posters 834.0k Views 54 Watching
    • ronan-a (Vates 🪐 XCP-ng Team) @Maelstrom96

      @Maelstrom96 said in XOSTOR hyperconvergence preview:

      The folder /dev/drbd/by-res/ doesn't exist currently.

      You're lucky, I produced a fix just yesterday for this kind of problem on pools with more than 3 machines: https://github.com/xcp-ng/sm/commit/f916647f44223206b24cf70d099637882c53fee8

      Unfortunately, I can't release a new version right away, but I think this change can be applied to your pool.
      In the worst case I'll see if I can release a new version without all the fixes in progress...

      Wescoeur committed to xcp-ng/sm:
      fix(LinstorSR): ensure vhdutil calls are correctly executed on pools with > 3 hosts
      
      Signed-off-by: Ronan Abhamon <ronan.abhamon@vates.fr>
      • Maelstrom96 @ronan-a (last edited by Maelstrom96)

        @ronan-a said in XOSTOR hyperconvergence preview:

        You're lucky, I produced a fix just yesterday for this kind of problem on pools with more than 3 machines: https://github.com/xcp-ng/sm/commit/f916647f44223206b24cf70d099637882c53fee8

        Unfortunately, I can't release a new version right away, but I think this change can be applied to your pool.
        In the worst case I'll see if I can release a new version without all the fixes in progress...

        Thanks, that does look like it would fix the missing drbd/by-res/ volumes.

        Do you have an idea about the missing StoragePool for the new host that was added using linstor-manager.addHost? I've checked the code and it seems like it might just provision the SP on sr.create?

        Also, I'm not sure how feasible it would be for SM but having a nightly-style build process for those cases seems like it would be really useful for hotfix testing.

        • drfill (last edited by drfill)

          Hello guys,
          Awesome work, it runs like a charm! But security is weak at a low level: anyone who wants to break the disk cluster/HC storage can do it. After installing, I can see that the LINSTOR controller ports are open to the whole world. Is there any way to close the external port (when management is on a public IP) and communicate only through the storage network?

          • olivierlambert (Vates 🪐 Co-Founder & CEO)

            Hmm, in theory I would say it should only listen on the management network (or the storage network), not on everything.
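            As a sketch of what restricting the bind address could look like (the file path and keys below follow the LINSTOR controller's `linstor.toml` configuration; verify the exact path, section names, and defaults against the LINSTOR user guide for your version, and note the IP is a placeholder):

            ```toml
            # /etc/linstor/linstor.toml (hypothetical example)
            [http]
              # Bind the controller's REST API to the storage-network IP only,
              # instead of listening on all interfaces.
              listen_addr = "10.0.0.1"
              port = 3370
            ```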

            • Maelstrom96 @ronan-a (last edited by Maelstrom96)

              @ronan-a Any news on when the new version of the LINSTOR SM will be released? We're hard blocked by the behavior with 4 nodes right now, so we can't move forward with a lot of other tests we want to do.

              We also worked on a custom build of linstor-controller and linstor-satellite to support CentOS 7 with its lack of setsid -w support, and we might want to see if we could get a satisfactory PR merged into linstor-server master so that people using XCP-ng can also use LINSTOR's built-in snapshot shipping. Since the K8s linstor snapshotter uses that functionality to provide volume backups, using K8s with LINSTOR on XCP-ng is not really possible unless this is fixed.

              Would that be something that you guys could help us push to linstor?

              • ronan-a (Vates 🪐 XCP-ng Team) @Maelstrom96

                @Maelstrom96

                Do you have an idea about the missing StoragePool for the new host that was added using linstor-manager.addHost? I've checked the code and it seems like it might just provision the SP on sr.create?

                If I remember correctly, this script is only here to add the PBDs of the new host and configure the services. If you want to add a new device, it is necessary to manually create a new LVM VG and add it via a linstor command.
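                For illustration, that would look something like the following (the host name `new-host`, device `/dev/sdb`, and pool/VG names are hypothetical; adapt them to how your SR was created):

                ```shell
                # 1. On the new host, create the LVM volume group that will back the storage pool.
                vgcreate linstor_group /dev/sdb

                # 2. From a host that can reach the LINSTOR controller, register that VG
                #    as an LVM storage pool on the new node.
                linstor storage-pool create lvm new-host xcp-sr-linstor_group linstor_group
                ```
                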

                Also, I'm not sure how feasible it would be for SM but having a nightly-style build process for those cases seems like it would be really useful for hotfix testing.

                The branch was in a bad state (many fixes to test, regressions, etc). I was able to clean all that up, it should be easier to do releases now.

                Any news on when the new version of linstor SM will be released?

                Today. 😉

                • ronan-a (Vates 🪐 XCP-ng Team) (last edited by ronan-a)

                  WARNING: I just pushed new packages (they should be available in our repo in a few minutes) and I made an important change in the driver which requires manual intervention.

                  minidrbdcluster is no longer used to start the controller, instead we use drbd-reactor which is more robust.
                  To update properly, you must:

                  1. Disable minidrbdcluster on each host: systemctl disable --now minidrbdcluster.
                  2. Install new LINSTOR packages using yum:
                  • blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm
                  • xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm
                  • xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm
                  • http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm
                  • sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm
                  3. On each host, edit /etc/drbd-reactor.d/sm-linstor.toml. (Note: you probably need to create the folder /etc/drbd-reactor.d/ first with mkdir.) Add this content:
                  [[promoter]]
                  
                  [promoter.resources.xcp-persistent-database]
                  start = [ "var-lib-linstor.service", "linstor-controller.service" ]
                  
                  4. After that you can manually start drbd-reactor on each machine: systemctl enable --now drbd-reactor.

                  You can reuse your SR again.
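                  Put together, the per-host migration above can be sketched as a script (a sketch only, assuming the xcp-ng-linstor yum repository is already configured so `yum update` pulls the package versions listed above):

                  ```shell
                  # 1. Stop and disable the old controller launcher.
                  systemctl disable --now minidrbdcluster

                  # 2. Pull the updated LINSTOR-related packages from the configured repo.
                  yum update blktap xcp-ng-linstor xcp-ng-release-linstor http-nbd-transfer sm

                  # 3. Create the drbd-reactor promoter config for the LINSTOR database resource.
                  mkdir -p /etc/drbd-reactor.d
                  cat > /etc/drbd-reactor.d/sm-linstor.toml <<'EOF'
                  [[promoter]]

                  [promoter.resources.xcp-persistent-database]
                  start = [ "var-lib-linstor.service", "linstor-controller.service" ]
                  EOF

                  # 4. Start the new supervisor.
                  systemctl enable --now drbd-reactor
                  ```
                  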

                  • Swen

                    @ronan-a Just to be sure: if you install it from scratch, you can still use the installation instructions from the top of this thread, correct?

                    • ronan-a (Vates 🪐 XCP-ng Team) @Swen

                      @Swen Yes, you can still use the installation script. I just changed a line to install the new blktap, so re-download it if necessary.

                      • Swen @ronan-a

                        @ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?

                        • fred974 @Swen (last edited by fred974)

                          @ronan-a Just to be clear on what I have to do:

                          1. Disable minidrbdcluster on each host: systemctl disable --now minidrbdcluster. No issue here...

                          2. Install the new LINSTOR packages. How do we do that? Do we run the installer again by running:
                            wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/1707fbcfac22e662c2b80c14762f2c7d937e677c/gistfile1.txt -O install && chmod +x install
                            or
                            ./install update

                          Or do I simply install the new RPMs without running the installer?

                          wget --no-check-certificate blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm
                          wget --no-check-certificate xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm
                          wget --no-check-certificate xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm
                          wget --no-check-certificate http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm
                          wget --no-check-certificate sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm
                          yum install *.rpm
                          

                          Where do we get these files from? What is the URL?

                          3. On each host, edit /etc/drbd-reactor.d/sm-linstor.toml. No problem here...

                          Can you please confirm which procedure to use for step 2?

                          Thank you.

                          • fred974 @fred974

                            @ronan-a is that the correct URL?

                            https://koji.xcp-ng.org/kojifiles/packages/blktap/3.37.4/1.0.1.0.linstor.1.xcpng8.2/x86_64/blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm

                            https://koji.xcp-ng.org/kojifiles/packages/xcp-ng-linstor/1.0/1.xcpng8.2/noarch/xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm

                            https://koji.xcp-ng.org/kojifiles/packages/xcp-ng-release-linstor/1.2/1.xcpng8.2/noarch/xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm

                            https://koji.xcp-ng.org/kojifiles/packages/http-nbd-transfer/1.2.0/1.xcpng8.2/x86_64/http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm

                            https://koji.xcp-ng.org/kojifiles/packages/sm/2.30.7/1.3.0.linstor.7.xcpng8.2/x86_64/sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm

                            • ronan-a (Vates 🪐 XCP-ng Team) @Swen (last edited by ronan-a)

                              @Swen said in XOSTOR hyperconvergence preview:

                              @ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?

                              If we don't hit a new critical bug, normally in a few weeks.

                              @fred974

                              Or do I simply install the new RPMs without running the installer?

                              You can update the packages using yum alone if you already have the xcp-ng-linstor yum repo configured. There is no reason to download the packages manually from koji.
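                              For example, something like this should be enough (a sketch; it assumes the xcp-ng-linstor repository is configured and enabled on the host):

                              ```shell
                              # Check that the LINSTOR repo is present and enabled.
                              yum repolist enabled | grep -i linstor

                              # Refresh metadata and pull everything that has an update,
                              # including the LINSTOR-related packages.
                              yum clean metadata
                              yum update
                              ```
                              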

                              • fred974 @ronan-a

                                @ronan-a ok,
                                I managed to install using yum

                                yum install blktap.x86_64
                                yum install xcp-ng-linstor.noarch
                                yum install xcp-ng-release-linstor.noarch
                                yum install http-nbd-transfer.x86_64
                                

                                But I cannot find sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm.
                                Is it yum install sm-core-libs.noarch ?

                                • ronan-a (Vates 🪐 XCP-ng Team) @fred974

                                  @fred974 What's the output of yum update sm?

                                  • fred974 @fred974

                                    systemctl enable --now drbd-reactor

                                    Job for drbd-reactor.service failed because the control process exited with error code. See "systemctl status drbd-reactor.service" and "journalctl -xe" for details.
                                    

                                    systemctl status drbd-reactor.service

                                    [12:21 uk xostortmp]# systemctl status drbd-reactor.service
                                    * drbd-reactor.service - DRBD-Reactor Service
                                       Loaded: loaded (/usr/lib/systemd/system/drbd-reactor.service; enabled; vendor preset: disabled)
                                       Active: failed (Result: exit-code) since Thu 2023-03-23 12:12:33 GMT; 9min ago
                                         Docs: man:drbd-reactor
                                               man:drbd-reactorctl
                                               man:drbd-reactor.toml
                                      Process: 8201 ExecStart=/usr/sbin/drbd-reactor (code=exited, status=1/FAILURE)
                                     Main PID: 8201 (code=exited, status=1/FAILURE)
                                    

                                    journalctl -xe has no useful information, but the SMlog file has the following:

                                    Mar 23 12:29:27 uk SM: [17122] Raising exception [47, The SR is not available [opterr=Unable to find controller uri...]]
                                    Mar 23 12:29:27 uk SM: [17122] lock: released /var/lock/sm/a20ee08c-40d0-9818-084f-282bbca1f217/sr
                                    Mar 23 12:29:27 uk SM: [17122] ***** generic exception: sr_scan: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
                                    Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
                                    Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
                                    Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
                                    Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
                                    Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
                                    Mar 23 12:29:27 uk SM: [17122]
                                    Mar 23 12:29:27 uk SM: [17122] ***** LINSTOR resources on XCP-ng: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 378, in run
                                    Mar 23 12:29:27 uk SM: [17122]     ret = cmd.run(sr)
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
                                    Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
                                    Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
                                    Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
                                    Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
                                    Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
                                    Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
                                    

                                    Is it normal that the XOSTOR SR is still visible in XO?
                                    (screenshot attached: 9b88c891-78c3-4473-be5d-08e43c7d40ad-image.png)

                                    • fred974 @ronan-a

                                      @ronan-a said in XOSTOR hyperconvergence preview:

                                      What's the output of yum update sm?

                                      [12:33 uk ~]# yum update sm
                                      
                                      Loaded plugins: fastestmirror
                                      Loading mirror speeds from cached hostfile
                                      Excluding mirror: updates.xcp-ng.org
                                       * xcp-ng-base: mirrors.xcp-ng.org
                                      Excluding mirror: updates.xcp-ng.org
                                       * xcp-ng-linstor: mirrors.xcp-ng.org
                                      Excluding mirror: updates.xcp-ng.org
                                       * xcp-ng-updates: mirrors.xcp-ng.org
                                      No packages marked for update
                                      
                                      • ronan-a (Vates 🪐 XCP-ng Team) @fred974 (last edited by ronan-a)

                                        @fred974 And rpm -qa | grep sm? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.

                                        • fred974 @ronan-a

                                          @ronan-a said in XOSTOR hyperconvergence preview:

                                          @fred974 And sudo rpm -qa | grep sm? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.

                                          microsemi-smartpqi-1.2.10_025-2.xcpng8.2.x86_64
                                          smartmontools-6.5-1.el7.x86_64
                                          sm-rawhba-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
                                          ssmtp-2.64-14.el7.x86_64
                                          sm-cli-0.23.0-7.xcpng8.2.x86_64
                                          sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
                                          libsmbclient-4.10.16-15.el7_9.x86_64
                                          psmisc-22.20-15.el7.x86_64
                                          

                                          Yes, I installed it from the koji URLs before seeing your reply.

                                          • ronan-a (Vates 🪐 XCP-ng Team) @fred974

                                            @fred974 I just repaired your pool, there was a small error in the conf that I gave in my previous post.
