
XOSTOR hyperconvergence preview

• BHellman (3rd party vendor)

I did those commands on xcp1 (pool master) and on the SR that was XOSTOR (LINSTOR), and powered off xcp2. At that point the pool disappeared.

Now I'm getting the following on the xcp servers' consoles:

      Broadcast message from systemd-journald@xcp3 (Thu 2024-02-08 14:03:12 EST):
      
      xapi-nbd[5580]: main: Failed to log in via xapi's Unix domain socket in 300.000000 seconds
      
      
      Broadcast message from systemd-journald@xcp3 (Thu 2024-02-08 14:03:12 EST):
      
      xapi-nbd[5580]: main: Caught unexpected exception: (Failure
      
      
      Broadcast message from systemd-journald@xcp3 (Thu 2024-02-08 14:03:12 EST):
      
      xapi-nbd[5580]: main:   "Failed to log in via xapi's Unix domain socket in 300.000000 seconds")
      
      

After powering xcp2 back up, the pool never comes back in the XOA interface.

I'm seeing this on xcp1:

      [14:04 xcp1 ~]# drbdadm status
      xcp-persistent-database role:Secondary
        disk:Diskless quorum:no
        xcp2 connection:Connecting
        xcp3 connection:Connecting
      
      

On xcp2 and xcp3:

      [14:10 xcp2 ~]# drbdadm status
      # No currently configured DRBD found.
      

Seems like I hosed this thing up really good. I assume this broke because XOSTOR isn't technically a shared disk.

      [14:15 xcp1 /]# xe sr-list
      The server could not join the liveset because the HA daemon could not access the heartbeat disk.
      

Is HA + XOSTOR something that should work?
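
For anyone landing here in a similar state, a minimal recovery sketch, assuming a standard XCP-ng HA setup (verify each command against your own pool before running it; host and resource names are the ones from this thread):

    # If the statefile on the XOSTOR SR is unreachable, HA has to be
    # disabled out-of-band on each affected host first:
    xe host-emergency-ha-disable force=true

    # Once xapi responds again, verify that DRBD reconnected and has quorum:
    drbdadm status xcp-persistent-database

    # Then HA can be re-enabled against a reachable heartbeat SR:
    xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>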

• Jonathon

Hello!

I am attempting to update our hosts, starting with the pool master, but I am getting a message that I wanted to ask about.

The following happens when I attempt a yum update:

        --> Processing Dependency: sm-linstor for package: xcp-ng-linstor-1.1-3.xcpng8.2.noarch
        --> Finished Dependency Resolution
        Error: Package: xcp-ng-linstor-1.1-3.xcpng8.2.noarch (xcp-ng-updates)
                   Requires: sm-linstor
        You could try using --skip-broken to work around the problem
                   You could try running: rpm -Va --nofiles --nodigest
        

The only reference I am finding is here: https://koji.xcp-ng.org/buildinfo?buildID=3044
My best guess is that I need to do two updates, the first one with --skip-broken, but I wanted to ask first so as not to put things in a weird state.
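
One way to inspect what is behind a missing dependency like this, as a hedged diagnostic using standard yum commands (nothing here is XOSTOR-specific):

    # Which repository, if any, provides the missing package?
    yum provides sm-linstor

    # Which sm/linstor package versions are visible from the enabled repos?
    yum list available --showduplicates | grep -i linstor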

Thanks in advance!

• Midget @BHellman

@BHellman I have the EXACT same errors and scrolling logs now. I made a thread here...

• olivierlambert (Vates 🪐 Co-Founder & CEO) @BHellman

@BHellman Yes, it should. @ronan-a will take a look when he can 🙂

• stormi (Vates 🪐 XCP-ng Team) @Jonathon

@Jonathon Never use --skip-broken.

• stormi (Vates 🪐 XCP-ng Team) @Jonathon

@Jonathon What's the output of yum repolist?

• Jonathon @stormi

@stormi said in XOSTOR hyperconvergence preview:

yum repolist

lol glad I checked then

    # yum repolist
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-base: mirrors.xcp-ng.org
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-linstor: mirrors.xcp-ng.org
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-updates: mirrors.xcp-ng.org
    repo id                       repo name                                           status
    !xcp-ng-base                  XCP-ng Base Repository                               2,161
    !xcp-ng-linstor               XCP-ng LINSTOR Repository                              142
    !xcp-ng-updates               XCP-ng Updates Repository                            1,408
    !zabbix/x86_64                Zabbix Official Repository - x86_64                     79
    !zabbix-non-supported/x86_64  Zabbix Official Repository non-supported - x86_64        6
    repolist: 3,796
                  
• vaewyn

Are there any rough estimates for a timeline on paid support being available? We're looking at ditching VMware, and my company requires professional support availability. For virtualization I see that support is available, but I also need storage support, at least mostly at parity with the vSAN we have. Thanks to you all! Love these projects!

• olivierlambert (Vates 🪐 Co-Founder & CEO)

We are working at full speed to get it available ASAP. There are still some bugs to fix, and LINBIT is working on them.

• vaewyn

With the integration you are doing, is there a provision to designate racks/sites/datacenters/etc., so that, at some level, replicas can be kept off hosts in the same physical risk space(s)?

• olivierlambert (Vates 🪐 Co-Founder & CEO)

XOSTOR works at the pool level. You can have all your hosts in the pool, or only some of them participating in the HCI (e.g., 4 hosts with disks used for HCI and the others just consuming it). Obviously, it means the hosts without disks will have to read and write "remotely" on the hosts with the disks. But it might be perfectly acceptable 🙂
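
For reference, a hedged sketch of how to see which hosts hold data and which attach disklessly, using stock LINSTOR commands (run them on the host where the LINSTOR controller lives):

    # Hosts without local disks appear only with the built-in diskless pool:
    linstor storage-pool list

    # Per-resource view; diskless attachments are flagged as such:
    linstor resource list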

• vaewyn @olivierlambert

@olivierlambert I've understood that part... what I am wondering is: if I have 3 hosts in one data center and 3 hosts in another, and I have asked for a redundancy of 3 copies, is there a way to ensure all three copies are never in the same data center at the same time?

• olivierlambert (Vates 🪐 Co-Founder & CEO)

So I imagine very low latency between the 2 DCs? One pool with 6 hosts total, 3 per DC, right?

For now, there's no placement preference; we need to discuss topology with LINBIT.

And if the 2 DCs are far from each other, I would advise getting 2 pools and using 2 XOSTOR SRs in total.

• vaewyn @olivierlambert

@olivierlambert Correct... these DCs are across a campus on private fiber, so single-digit milliseconds worst case. We've historically had VMware keep 3 data copies and make sure at least one is in a separate DC... that way, when a DC is lost, the HA VMs can restart on the remaining host pool successfully because their storage is still available.

• olivierlambert (Vates 🪐 Co-Founder & CEO)

So you can create a pool across the 2 DCs, no problem. We'll take a deeper look at specifying where to replicate, to avoid having everything in the same place.

• BHellman (3rd party vendor) @olivierlambert

@olivierlambert said in XOSTOR hyperconvergence preview:

So I imagine very low latency between the 2 DCs? One pool with 6 hosts total, 3 per DC, right?

For now, there's no placement preference; we need to discuss topology with LINBIT.

And if the 2 DCs are far from each other, I would advise getting 2 pools and using 2 XOSTOR SRs in total.

This can be done using placement policies, as outlined in the LINSTOR user's guide. It will probably require a bit of extra work on XO to use those properties.
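
A hedged sketch of what that can look like with the LINSTOR CLI, per the user's guide (the datacenter property name and the resource-group name are illustrative; check linstor resource-group list for the group XOSTOR actually created):

    # Label each node with the datacenter it lives in (Aux/ namespace):
    linstor node set-property xcp1 Aux/datacenter dc-a
    linstor node set-property xcp4 Aux/datacenter dc-b

    # Ask the autoplacer to spread replicas across different labels:
    linstor resource-group modify <group-name> --replicas-on-different datacenter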

• olivierlambert (Vates 🪐 Co-Founder & CEO)

Technically, you could use a manual CLI call to do it until we expose it in XO 🙂

• vaewyn

For those that might run across my questions here... there is a nice blog post from LINBIT on how to span availability zones correctly to keep your data redundancy up:
https://linbit.com/blog/multi-az-replication-using-automatic-placement-rules-in-linstor/

So CLI is doable 🙂 A GUI would be nice in the future 😁

• Jonathon @Jonathon

@Jonathon said in XOSTOR hyperconvergence preview:

@stormi said in XOSTOR hyperconvergence preview:

yum repolist

lol glad I checked then

    # yum repolist
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-base: mirrors.xcp-ng.org
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-linstor: mirrors.xcp-ng.org
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-updates: mirrors.xcp-ng.org
    repo id                       repo name                                           status
    !xcp-ng-base                  XCP-ng Base Repository                               2,161
    !xcp-ng-linstor               XCP-ng LINSTOR Repository                              142
    !xcp-ng-updates               XCP-ng Updates Repository                            1,408
    !zabbix/x86_64                Zabbix Official Repository - x86_64                     79
    !zabbix-non-supported/x86_64  Zabbix Official Repository non-supported - x86_64        6
    repolist: 3,796

I was wondering if anyone had any insight into this? I'm still unable to update the XCP-ng hosts.

• stormi (Vates 🪐 XCP-ng Team) @Jonathon

@Jonathon I see no issue in this output, except that you have the Zabbix repositories enabled, which might or might not play a role in your update issue.
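
If you want to rule the extra repositories out, a hedged diagnostic using standard yum flags (this only affects the single run):

    # Retry the update with the third-party repos excluded:
    yum clean all
    yum update --disablerepo='zabbix*'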
