    XOSTOR hyperconvergence preview

    • Swen @ronan-a

      @ronan-a said in XOSTOR hyperconvergence preview:

      @Swen said in XOSTOR hyperconvergence preview:

      @ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?

      If we don't have a new critical bug, normally in a few weeks.

      Fingers crossed! 🙂

      • Swen

        @ronan-a: After doing the installation from scratch with newly installed XCP-ng hosts, all up to date, I need to do a repair (via XCP-ng Center) of the SR after running xe sr-create, because the SR is in the state Broken and the pool master's PBD is in the state Unplugged.

        I am not really sure what XCP-ng Center is doing when I click repair, but it works.

        I can reproduce this issue; it happens on every installation.
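
        On the CLI the broken state shows up roughly like this (a sketch; <sr-uuid> stands for the UUID of the new LINSTOR SR):

        # list the PBDs of the SR and their attach state
        xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached
        # -> every PBD reports currently-attached: true, except the pool master's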

        regards,
        Swen

        • olivierlambert (Vates 🪐 Co-Founder CEO)

          I don't remember if sr-create also plugs the PBDs by default 🤔

          Repair is just an xe pbd-plug IIRC.
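
          i.e. something like this (a sketch; <pbd-uuid> is the pool master's unplugged PBD, taken from xe pbd-list):

          xe pbd-plug uuid=<pbd-uuid>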

          • Swen @olivierlambert

            @olivierlambert it looks like sr-create is doing it, because the SR is attached on all other nodes; only on the pool master (or maybe on the node you run sr-create from) does the pbd-plug not work.

            • olivierlambert (Vates 🪐 Co-Founder CEO)

              What's the error message when you try to plug it?

              • Swen @olivierlambert

                @olivierlambert I need to be more clear about this: when doing the sr-create for the LINSTOR storage, no error is shown, but the PBD is not plugged on the pool master. On every other host in the cluster it happens automatically. After doing a pbd-plug for the pool master, the SR is plugged. No error is shown at all.

                • olivierlambert (Vates 🪐 Co-Founder CEO)

                  Okay I see, thanks 🙂

                  • Swen

                    Is there an easy way to map a LINSTOR resource volume to the virtual disk on XCP-ng? When doing linstor volume list I get a resource name back from LINSTOR like this:

                    xcp-volume-23d07d99-9990-4046-8e7d-020bd61c1883
                    

                    The last part looks like a UUID to me, but I am unable to find this UUID when using some xe commands.
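
                    For example, a lookup like this returns nothing, so the suffix is apparently not a XAPI UUID (illustrative command):

                    xe vdi-list uuid=23d07d99-9990-4046-8e7d-020bd61c1883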

                    • Swen

                      @ronan-a: I am playing around with XCP-ng, LINSTOR and CloudStack. Sometimes when I create a new VM I run into this error: The VDI is not available.
                      CloudStack automatically retries after this error, and then it works and the new VM starts. CloudStack uses a template, which is also on the LINSTOR SR, to create new VMs.
                      I attached the SMlog of the host.
                      SMlog.txt

                      • ronan-a (Vates 🪐 XCP-ng Team) @Swen

                        @Swen said in XOSTOR hyperconvergence preview:

                        After doing the installation from scratch with newly installed XCP-ng hosts, all up to date, I need to do a repair (via XCP-ng Center) of the SR after running xe sr-create, because the SR is in the state Broken and the pool master's PBD is in the state Unplugged.

                        I am not really sure what XCP-ng Center is doing when I click repair, but it works.

                        It's just a PBD plug call I suppose. Can you share your logs please?

                        • ronan-a (Vates 🪐 XCP-ng Team) @Swen

                          @Swen said in XOSTOR hyperconvergence preview:

                          Is there an easy way to map a LINSTOR resource volume to the virtual disk on XCP-ng? When doing linstor volume list I get a resource name back from LINSTOR like this:
                          xcp-volume-23d07d99-9990-4046-8e7d-020bd61c1883

                          The last part looks like a UUID to me, but I am unable to find this UUID when using some xe commands.

                          There is a tool installed by our RPMs to do that 😉
                          For example on my host:

                          linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume
                          {
                            "7ca7b184-ec9e-40bd-addc-082483f8e420/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"debian 11 hub disk\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                            "7ca7b184-ec9e-40bd-addc-082483f8e420/not-exists": "0", 
                            "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
                          }
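
                          To go the other way, from a LINSTOR volume name back to the XAPI VDI, you can grep the dump; the key before the slash is the VDI UUID (a sketch reusing the names above):

                          linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume | grep xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8
                          # -> "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": ...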
                          
                          • ronan-a (Vates 🪐 XCP-ng Team) @Swen

                            @Swen said in XOSTOR hyperconvergence preview:

                            @ronan-a: I am playing around with XCP-ng, LINSTOR and CloudStack. Sometimes when I create a new VM I run into this error: The VDI is not available.
                            CloudStack automatically retries after this error, and then it works and the new VM starts. CloudStack uses a template, which is also on the LINSTOR SR, to create new VMs.
                            I attached the SMlog of the host.
                            SMlog.txt

                            Can you share the log files of the other hosts please?

                            • Swen @ronan-a

                              @ronan-a sure, which logs exactly do you need?

                              • Swen @ronan-a

                                @ronan-a said in XOSTOR hyperconvergence preview:

                                There is a tool installed by our RPMs to do that 😉
                                For example on my host:

                                linstor-kv-tool -u xostor-2 -g xcp-sr-linstor_group_thin_device --dump-volumes -n xcp/volume
                                {
                                  "7ca7b184-ec9e-40bd-addc-082483f8e420/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"debian 11 hub disk\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                                  "7ca7b184-ec9e-40bd-addc-082483f8e420/not-exists": "0", 
                                  "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
                                }
                                

                                Great to know, thx for the info. Is there a reason not to use the same UUID in XCP-ng and LINSTOR? Does it make sense to add the VDI and/or VBD UUID to the output of the command?

                                • ronan-a (Vates 🪐 XCP-ng Team) @Swen

                                  @Swen said in XOSTOR hyperconvergence preview:

                                  @ronan-a sure, which logs exactly do you need?

                                  SMlog files of each host 😉

                                  • ronan-a (Vates 🪐 XCP-ng Team) @Swen

                                    @Swen said in XOSTOR hyperconvergence preview:

                                    Great to know, thx for the info. Is there a reason not to use the same UUID in XCP-ng and LINSTOR? Does it make sense to add the VDI and/or VBD UUID to the output of the command?

                                    The main reason is that you cannot rename a LINSTOR resource once it has been created, and we need to be able to do this to implement the snapshot feature. To work around that, a shared dictionary is used to map the XAPI UUIDs to the LINSTOR resources.

                                    It would make no sense, readability-wise, to use the XAPI UUIDs for the LINSTOR resources, because VDI UUIDs are renamed when a snapshot is created.

                                    I don't see a good reason to add the VBD UUIDs to the dictionary. You already have the VDI UUIDs; you can use xe commands to fetch the other info.
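
                                    For example (a sketch, reusing the VDI UUID from the dump above):

                                    # list the VBDs attached to a given VDI
                                    xe vbd-list vdi-uuid=7ca7b184-ec9e-40bd-addc-082483f8e420 params=uuid,vm-uuid,device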

                                    • Swen @ronan-a

                                      @ronan-a sorry, I was just unsure whether you need more than the SMlog files. 😉

                                      I will send you the log files via mail, because of the size.

                                      • Swen @ronan-a

                                        @ronan-a said in XOSTOR hyperconvergence preview:

                                        The main reason is that you cannot rename a LINSTOR resource once it has been created, and we need to be able to do this to implement the snapshot feature. To work around that, a shared dictionary is used to map the XAPI UUIDs to the LINSTOR resources.

                                        It would make no sense, readability-wise, to use the XAPI UUIDs for the LINSTOR resources, because VDI UUIDs are renamed when a snapshot is created.

                                        I don't see a good reason to add the VBD UUIDs to the dictionary. You already have the VDI UUIDs; you can use xe commands to fetch the other info.

                                        Ok, that makes sense. But what do you mean by "You already have the VDIs"? As far as I can see, the only mapping from the linstor-kv-tool output to the disk on XCP-ng is the name_label, is that correct?

                                        • ronan-a (Vates 🪐 XCP-ng Team) @Swen

                                          @Swen said in XOSTOR hyperconvergence preview:

                                          Ok, that makes sense. But what do you mean by "You already have the VDIs"? As far as I can see, the only mapping from the linstor-kv-tool output to the disk on XCP-ng is the name_label, is that correct?

                                          No, you have the VDI UUIDs:

                                          "7ca7b184-ec9e-40bd-addc-082483f8e420/volume-name": "xcp-volume-12571cf9-1c3b-4ee9-8f93-f4d2f7ea6bd8"
                                          

                                          The first UUID here is the VDI. 😉
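
                                          You can confirm it with xe (a sketch):

                                          xe vdi-list uuid=7ca7b184-ec9e-40bd-addc-082483f8e420 params=name-label,name-description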

                                            • Swen @ronan-a

                                            @ronan-a sorry, I totally missed that info.
