XCP-ng

    XOSTOR hyperconvergence preview

      fred974 @fred974

      systemctl enable --now drbd-reactor

      Job for drbd-reactor.service failed because the control process exited with error code. See "systemctl status drbd-reactor.service" and "journalctl -xe" for details.
      

      systemctl status drbd-reactor.service

      [12:21 uk xostortmp]# systemctl status drbd-reactor.service
      * drbd-reactor.service - DRBD-Reactor Service
         Loaded: loaded (/usr/lib/systemd/system/drbd-reactor.service; enabled; vendor preset: disabled)
         Active: failed (Result: exit-code) since Thu 2023-03-23 12:12:33 GMT; 9min ago
           Docs: man:drbd-reactor
                 man:drbd-reactorctl
                 man:drbd-reactor.toml
        Process: 8201 ExecStart=/usr/sbin/drbd-reactor (code=exited, status=1/FAILURE)
       Main PID: 8201 (code=exited, status=1/FAILURE)
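
      A minimal sketch of ways to pull more detail on the failure (the paths below are the usual drbd-reactor defaults, not confirmed here):

      # unit-specific journal entries, often more useful than journalctl -xe
      journalctl -u drbd-reactor.service -n 50 --no-pager
      # plugin/promoter status, if the daemon can start at all
      drbd-reactorctl status
      # configuration the daemon tries to parse on startup
      cat /etc/drbd-reactor.toml
      ls /etc/drbd-reactor.d/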
      

      journalctl -xe has no useful information, but the SMlog file has the following:

      Mar 23 12:29:27 uk SM: [17122] Raising exception [47, The SR is not available [opterr=Unable to find controller uri...]]
      Mar 23 12:29:27 uk SM: [17122] lock: released /var/lock/sm/a20ee08c-40d0-9818-084f-282bbca1f217/sr
      Mar 23 12:29:27 uk SM: [17122] ***** generic exception: sr_scan: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
      Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
      Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
      Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
      Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
      Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
      Mar 23 12:29:27 uk SM: [17122]
      Mar 23 12:29:27 uk SM: [17122] ***** LINSTOR resources on XCP-ng: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 378, in run
      Mar 23 12:29:27 uk SM: [17122]     ret = cmd.run(sr)
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
      Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
      Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
      Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
      Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
      Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
      Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
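
      The "Unable to find controller uri" part suggests that no host is currently running linstor-controller; on XOSTOR that service is normally promoted on one node by drbd-reactor, which is failing above. A hedged sketch for checking (host names are placeholders):

      # is a controller running on this host?
      systemctl status linstor-controller --no-pager
      # what the promoter plugin is managing here
      drbd-reactorctl status
      # query each candidate controller directly from any host
      linstor --controllers host1,host2,host3 node list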
      

      Is it normal that the XOSTOR SR is still visible in XO?
      (screenshot attached)

        fred974 @ronan-a

        @ronan-a said in XOSTOR hyperconvergence preview:

        What's the output of yum update sm?

        [12:33 uk ~]# yum update sm
        
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
        Excluding mirror: updates.xcp-ng.org
         * xcp-ng-base: mirrors.xcp-ng.org
        Excluding mirror: updates.xcp-ng.org
         * xcp-ng-linstor: mirrors.xcp-ng.org
        Excluding mirror: updates.xcp-ng.org
         * xcp-ng-updates: mirrors.xcp-ng.org
        No packages marked for update
        
          ronan-a Vates 🪐 XCP-ng Team @fred974

          @fred974 And rpm -qa | grep sm? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.

            fred974 @ronan-a

            @ronan-a said in XOSTOR hyperconvergence preview:

            @fred974 And sudo rpm -qa | grep sm? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.

            microsemi-smartpqi-1.2.10_025-2.xcpng8.2.x86_64
            smartmontools-6.5-1.el7.x86_64
            sm-rawhba-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
            ssmtp-2.64-14.el7.x86_64
            sm-cli-0.23.0-7.xcpng8.2.x86_64
            sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
            libsmbclient-4.10.16-15.el7_9.x86_64
            psmisc-22.20-15.el7.x86_64
            

             Yes, I installed it from the koji URLs before seeing your reply.
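
             A small sketch of how one might confirm where an installed package came from, e.g. a koji build vs. the xcp-ng-linstor repo:

             # build metadata of the installed sm package
             rpm -qi sm | grep -E 'Version|Release|Build Host|Vendor'
             # yum transactions that touched it
             yum history list sm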

              ronan-a Vates 🪐 XCP-ng Team @fred974

              @fred974 I just repaired your pool, there was a small error in the conf that I gave in my previous post.

                fred974 @ronan-a

                @ronan-a said in XOSTOR hyperconvergence preview:

                I just repaired your pool, there was a small error in the conf that I gave in my previous post.

                Thank you very much. I really appreciate you fixing this for me 🙂

                  Swen @ronan-a

                  @ronan-a said in XOSTOR hyperconvergence preview:

                  @Swen said in XOSTOR hyperconvergence preview:

                   @ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information on when to expect the first stable release?

                   If we don't have a new critical bug, normally in a few weeks.

                  Fingers crossed! 🙂

                    Swen

                    @ronan-a: After doing the installation from scratch with newly installed xcp-ng hosts, all up to date, I need to do a repair (via XCP-ng Center) of the SR after doing the xe sr-create, because the SR is in state: Broken and the pool-master is in state: Unplugged.

                    I am not really sure what XCP-ng Center is doing when I click repair, but it works.

                    I can reproduce this issue; it happens on every installation.

                    regards,
                    Swen

                      olivierlambert Vates 🪐 Co-Founder CEO

                      I don't remember if sr-create is also plugging the PBD by default 🤔

                      Repair is just a xe pbd-plug IIRC.
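
                      Roughly the manual equivalent (a sketch, UUIDs are placeholders):

                      # list the SR's PBDs and see which host is not attached
                      xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached
                      # plug the missing one
                      xe pbd-plug uuid=<pbd-uuid>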

                        Swen @olivierlambert

                        @olivierlambert it looks like sr-create is doing it, because on all other nodes the SR is attached; only on the pool-master (or maybe the node you run sr-create from) the pbd-plug does not work.

                          olivierlambert Vates 🪐 Co-Founder CEO

                          What's the error message when you try to plug it?

                            Swen @olivierlambert

                            @olivierlambert I need to be more clear about this: when doing the sr-create for the linstor storage no error is shown, but the pbd will not be plugged at the pool-master. On every other host in the cluster it works automatically. After doing a pbd-plug for the pool-master the SR will be plugged. No error is shown at all.

                              olivierlambert Vates 🪐 Co-Founder CEO

                              Okay I see, thanks 🙂

                                Swen

                                Is there an easy way to map the linstor resource volume to the virtual disk on xcp-ng? When doing linstor volume list I get a Resource name back from linstor like this:

                                xcp-volume-23d07d99-9990-4046-8e7d-020bd61c1883
                                

                                 The last part looks like a UUID to me, but I am unable to find this UUID when using xe commands.

                                  Swen

                                  @ronan-a: I am playing around with xcp-ng, linstor and CloudStack. Sometimes when I create a new VM I run into this error: The VDI is not available
                                  CS automatically retries after this error, and then it works and the new VM starts. CS is using a template, which is also on the linstor SR, to create new VMs.
                                  I attached the SMlog of the host.
                                  SMlog.txt

                                    ronan-a Vates 🪐 XCP-ng Team @Swen

                                    @Swen After doing the installation from scratch with newly installed xcp-ng hosts, all up to date, I need to do a repair (via XCP-ng Center) of the SR after doing the xe sr-create, because the SR is in state: Broken and the pool-master is in state: Unplugged.

                                    I am not really sure what XCP-ng Center is doing when I click repair, but it works.

                                    It's just a PBD plug call I suppose. Can you share your logs please?

                                      ronan-a Vates 🪐 XCP-ng Team @Swen

                                      @Swen said in XOSTOR hyperconvergence preview:

                                      Is there an easy way to map the linstor resource volume to the virtual disk on xcp-ng? When doing linstor volume list I get a Resource name back from linstor like this:
                                      xcp-volume-23d07d99-9990-4046-8e7d-020bd61c1883

                                      The last part looks like a UUID to me, but I am unable to find this UUID when using xe commands.

                                      There is a tool installed by our RPMs to do that 😉
                                      For example on my host:

                                      linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume
                                      {
                                        "7ca7b184-ec9e-40bd-addc-082483f8e420/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"debian 11 hub disk\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                                        "7ca7b184-ec9e-40bd-addc-082483f8e420/not-exists": "0", 
                                        "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
                                      }
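
                                       Reusing the example output above, a sketch of mapping a LINSTOR volume name back to its VDI (the UUIDs are the ones from the dump, shown only as an illustration):

                                       # the key prefix before "/volume-name" in the matching line is the VDI UUID
                                       linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume | grep 'xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8'
                                       # then inspect that VDI with xe
                                       xe vdi-list uuid=7ca7b184-ec9e-40bd-addc-082483f8e420 params=name-label,sr-uuid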
                                      
                                        ronan-a Vates 🪐 XCP-ng Team @Swen

                                        @Swen said in XOSTOR hyperconvergence preview:

                                        @ronan-a: I am playing around with xcp-ng, linstor and CloudStack. Sometimes when I create a new VM I run into this error: The VDI is not available
                                        CS automatically retries after this error, and then it works and the new VM starts. CS is using a template, which is also on the linstor SR, to create new VMs.
                                        I attached the SMlog of the host.
                                        SMlog.txt

                                        Can you share the log files of the other hosts please?

                                          Swen @ronan-a

                                          @ronan-a sure, which logs exactly do you need?

                                            Swen @ronan-a

                                            @ronan-a said in XOSTOR hyperconvergence preview:

                                            There is a tool installed by our RPMs to do that 😉
                                            For example on my host:

                                            linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume
                                            {
                                              "7ca7b184-ec9e-40bd-addc-082483f8e420/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"debian 11 hub disk\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                                              "7ca7b184-ec9e-40bd-addc-082483f8e420/not-exists": "0", 
                                              "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
                                            }
                                            

                                             Great to know, thx for the info. Is there a reason not to use the same UUID in xcp-ng and linstor? Does it make sense to add the VDI and/or VBD UUID to the output of the command?
