XCP-ng

    XOSTOR hyperconvergence preview

    465 Posts 51 Posters 834.6k Views 54 Watching
    • fred974 @ronan-a

      @ronan-a said in XOSTOR hyperconvergence preview:

      @fred974 And sudo rpm -qa | grep sm? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.

      microsemi-smartpqi-1.2.10_025-2.xcpng8.2.x86_64
      smartmontools-6.5-1.el7.x86_64
      sm-rawhba-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
      ssmtp-2.64-14.el7.x86_64
      sm-cli-0.23.0-7.xcpng8.2.x86_64
      sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
      libsmbclient-4.10.16-15.el7_9.x86_64
      psmisc-22.20-15.el7.x86_64
      

      Yes, I installed it from the koji URLs before seeing your reply.

      • ronan-a Vates 🪐 XCP-ng Team @fred974

        @fred974 I just repaired your pool; there was a small error in the conf I gave in my previous post.

        • fred974 @ronan-a

          @ronan-a said in XOSTOR hyperconvergence preview:

          I just repaired your pool; there was a small error in the conf I gave in my previous post.

          Thank you very much. I really appreciate you fixing this for me 🙂

          • Swen @ronan-a

            @ronan-a said in XOSTOR hyperconvergence preview:

            @Swen said in XOSTOR hyperconvergence preview:

            @ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?

            If we don't have a new critical bug, normally in a few weeks.

            Fingers crossed! 🙂

            • Swen

              @ronan-a: After doing the installation from scratch with newly installed xcp-ng hosts, all up to date, I need to do a repair (via xcp-ng center) of the SR after running xe sr-create, because the SR is in state Broken and the pool-master is in state Unplugged.

              I am not really sure what xcp-ng center is doing when I click repair, but it works.

              I can reproduce this issue; it happens on every installation.

              regards,
              Swen

              • olivierlambert Vates 🪐 Co-Founder CEO

                I don't remember if sr-create is also plugging the PBD by default 🤔

                Repair is just an xe pbd-plug IIRC.

                • Swen @olivierlambert

                  @olivierlambert it looks like sr-create is doing it, because on all the other nodes the SR is attached; only on the pool-master (or maybe the node you run sr-create from) does the pbd-plug not work.

                  • olivierlambert Vates 🪐 Co-Founder CEO

                    What's the error message when you try to plug it?

                  • Swen @olivierlambert

                      @olivierlambert I need to be more clear about this: when doing the sr-create for the linstor storage, no error is shown, but the PBD is not plugged on the pool-master. On every other host in the cluster it works automatically. After doing a pbd-plug for the pool-master, the SR is plugged. No error is shown at all.
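
                      For reference, the manual repair step can be sketched with the xe CLI (a hedged sketch; the SR and host UUIDs are placeholders to substitute from your own pool):

                      ```shell
                      # Find the PBD connecting the LINSTOR SR to the pool master
                      # (<sr-uuid> and <master-uuid> are placeholders).
                      PBD_UUID=$(xe pbd-list sr-uuid=<sr-uuid> host-uuid=<master-uuid> --minimal)

                      # Check whether it is attached (false on the master in this case)
                      xe pbd-param-get uuid="$PBD_UUID" param-name=currently-attached

                      # Plug it; this is what the xcp-ng center "repair" action amounts to
                      xe pbd-plug uuid="$PBD_UUID"
                      ```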

                      • olivierlambert Vates 🪐 Co-Founder CEO

                        Okay I see, thanks 🙂

                        • Swen

                          Is there an easy way to map the linstor resource volume to the virtual disk on xcp-ng? When doing linstor volume list I get a Resource name back from linstor like this:

                          xcp-volume-23d07d99-9990-4046-8e7d-020bd61c1883
                          

                          The last part looks like a UUID to me, but I am unable to find this UUID when using some xe commands.

                          • Swen

                            @ronan-a: I am playing around with xcp-ng, linstor and Cloudstack. Sometimes when I create a new VM I run into this error: The VDI is not available
                            CS automatically retries after this error, and then it works and the new VM starts. CS uses a template, which is also on the linstor SR, to create new VMs.
                            I attached the SMlog of the host.
                            SMlog.txt

                            • ronan-a Vates 🪐 XCP-ng Team @Swen

                              @Swen After doing the installation from scratch with newly installed xcp-ng hosts, all up to date, I need to do a repair (via xcp-ng center) of the SR after running xe sr-create, because the SR is in state Broken and the pool-master is in state Unplugged.

                              I am not really sure what xcp-ng center is doing when I click repair, but it works.

                              It's just a PBD plug call I suppose. Can you share your logs please?

                              • ronan-a Vates 🪐 XCP-ng Team @Swen

                                @Swen said in XOSTOR hyperconvergence preview:

                                Is there an easy way to map the linstor resource volume to the virtual disk on xcp-ng? When doing linstor volume list I get a Resource name back from linstor like this:
                                xcp-volume-23d07d99-9990-4046-8e7d-020bd61c1883

                                The last part looks like a UUID to me, but I am unable to find this UUID when using some xe commands.

                                There is a tool installed by our RPMs to do that 😉
                                For example on my host:

                                linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume
                                {
                                  "7ca7b184-ec9e-40bd-addc-082483f8e420/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"debian 11 hub disk\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                                  "7ca7b184-ec9e-40bd-addc-082483f8e420/not-exists": "0", 
                                  "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
                                }
                                
                                • ronan-a Vates 🪐 XCP-ng Team @Swen

                                  @Swen said in XOSTOR hyperconvergence preview:

                                   @ronan-a: I am playing around with xcp-ng, linstor and Cloudstack. Sometimes when I create a new VM I run into this error: The VDI is not available
                                   CS automatically retries after this error, and then it works and the new VM starts. CS uses a template, which is also on the linstor SR, to create new VMs.
                                  I attached the SMlog of the host.
                                  SMlog.txt

                                  Can you share the log files of the other hosts please?

                                    • Swen @ronan-a

                                    @ronan-a sure, which logs exactly do you need?

                                      • Swen @ronan-a

                                      @ronan-a said in XOSTOR hyperconvergence preview:

                                      There is a tool installed by our RPMs to do that 😉
                                      For example on my host:

                                      linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume
                                      {
                                        "7ca7b184-ec9e-40bd-addc-082483f8e420/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"debian 11 hub disk\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                                        "7ca7b184-ec9e-40bd-addc-082483f8e420/not-exists": "0", 
                                        "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
                                      }
                                      

                                       Great to know, thx for the info. Is there a reason not to use the same UUID in xcp-ng and linstor? Does it make sense to add the VDI and/or VBD UUID to the output of the command?

                                       • ronan-a Vates 🪐 XCP-ng Team @Swen

                                        @Swen said in XOSTOR hyperconvergence preview:

                                        @ronan-a sure, which logs exactly do you need?

                                        SMlog files of each host 😉

                                         • ronan-a Vates 🪐 XCP-ng Team @Swen

                                          @Swen said in XOSTOR hyperconvergence preview:

                                          Great to know, thx for the info. Is there a reason not to use the same uuid in xcp-ng and linstor? Does it make sense to add the vdi and/or vbd uuid to the output of the command?

                                           The main reason is that you cannot rename a LINSTOR resource once it has been created, and we need to be able to do this to implement the snapshot feature. To work around that, a shared dictionary is used to map XAPI UUIDs to the LINSTOR resources.

                                           It would make no sense, readability-wise, to use the XAPI UUIDs for the LINSTOR resources, because VDI UUIDs are renamed when a snapshot is created.

                                           I don't see a good reason to add the VBD UUIDs to the dictionary. You already have the VDIs; you can use xe commands to fetch the other info.
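
                                           Putting the two halves of the mapping together, a lookup from a LINSTOR resource name back to the XAPI VDI can be sketched like this (a hedged example reusing the controller and group names from the dump shown earlier; adjust them to your pool):

                                           ```shell
                                           # Dump the shared dictionary and show the entry whose volume-name
                                           # matches a given LINSTOR resource; the surrounding lines share the
                                           # same "<vdi-uuid>/" key prefix.
                                           linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device \
                                             --dump-volumes -n xcp/volume \
                                             | grep -B 2 'xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8'

                                           # The key prefix is the XAPI VDI UUID; inspect it with xe
                                           # (<vdi-uuid> is a placeholder taken from the output above).
                                           xe vdi-list uuid=<vdi-uuid>
                                           ```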

                                           • Swen @ronan-a

                                            @ronan-a sorry, was just unsure if you need more than SMlog files. 😉

                                            I will send you the log files via mail, because of the size.

