XCP-ng

    Multiple Volumes Alternate Primary

    XOSTOR · 8 Posts · 3 Posters · 493 Views
    • David

      This looks really cool, I'm looking forward to testing it. A quick question...

      I was wondering, in a two-node system, whether there is a performance benefit to hosting VMs on the node where their DRBD volume is primary, and if so, whether I can have multiple XOSTOR volumes using separate disks on each node, e.g. linstor_group1 / linstor_group2, where node1 is primary for linstor_group1 and node2 is primary for linstor_group2 (a hypothetical layout is sketched below).

      In that case I could organise VM home servers according to where their DRBD primary is.

      So I guess the questions are:

      1. Can I have multiple XOSTOR volumes?
      2. Can I have alternate primaries?
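
      To make that concrete, the kind of layout I have in mind on the LVM side would be roughly the sketch below (volume group and device names are purely hypothetical, and whether XOSTOR can actually consume two groups like this is exactly what I'm asking):

          # On node1 (hypothetical devices): two volume groups backed by different disks
          vgcreate linstor_group1 /dev/nvme0n1
          vgcreate linstor_group2 /dev/sdb
          # On node2: the same two volume groups on its own disks
          vgcreate linstor_group1 /dev/nvme0n1
          vgcreate linstor_group2 /dev/sdb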

      Thanks

      • David @David

        I set up a two-node cluster, hv-01 & hv-02, and created a VM.

        With the VM on hv-01

        hv-01 shows:

        xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Primary
          disk:UpToDate
          hv-02 role:Secondary
            peer-disk:UpToDate
        

        hv-02 shows:

        xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Secondary
          disk:UpToDate
          hv-01 role:Primary
            peer-disk:UpToDate
        

        Migrated VM to hv-02

        hv-01 shows:

        xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Secondary
          disk:UpToDate
          hv-02 role:Primary
            peer-disk:UpToDate
        

        hv-02 shows:

        xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Primary
          disk:UpToDate
          hv-01 role:Secondary
            peer-disk:UpToDate
        

        It looks like the VM's disk is automagically assigned primary on its local node, mind blown! Am I reading this right?
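
        (For anyone wanting to reproduce this: the output above is what the DRBD status command prints on each host, roughly as below, assuming the DRBD 9 utilities that ship with XOSTOR.)

            # Show the role/state of this specific volume on the current host and its peer
            drbdadm status xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107
            # Or show every DRBD resource on this host
            drbdadm status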

        • olivierlambert (Vates 🪐 Co-Founder & CEO)

          Question for @ronan-a

          • ronan-a (Vates 🪐 XCP-ng Team) @David

            @David I'm not sure I totally understand your questions. A DRBD "master" (i.e. a primary) is never fixed; it's just a state at a given point in time.
            A DRBD volume becomes primary simply because a process on a machine has opened it, nothing more. In the case of XCP-ng, the volume can be opened by a tapdisk instance on one host today and on another host the next day. However, regarding performance, there is a feature we don't use at the moment that reduces network usage by forcing diskful placement locally: https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-linstor-auto-diskful
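
            Per the linked guide, that would boil down to setting a LINSTOR property. It isn't something XOSTOR exposes today, so treat the lines below as an illustrative sketch only:

                # Turn a diskless resource that stays primary into a diskful one after N minutes,
                # either globally on the controller or per resource definition (per the linked guide)
                linstor controller set-property DrbdOptions/auto-diskful 5
                linstor resource-definition set-property xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 DrbdOptions/auto-diskful 5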

            In that case I could organise VM home servers according to where their DRBD primary is.

            Considering what I said, this is not a good idea. It would be more interesting to use the auto-diskful functionality, but it becomes complex to use when VHD snapshots are involved...
            For each snapshot, a diskful/diskless DRBD resource is created on an arbitrary host; there is no user choice during creation.
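
            To see where those diskful and diskless copies actually end up, you can list the resources from the LINSTOR CLI (a rough example, run wherever the linstor client can reach the controller):

                # One line per resource and node; diskless copies are reported as such
                linstor resource list
                # Per-volume details, including the backing storage pool of each copy
                linstor resource list-volumes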

            It looks like the VM's disk is automagically assigned primary on its local node, mind blown! Am I reading this right?

            In both outputs, a5fd7961 is open on the host the VM is running on, so no surprise. 🙂

            • David @ronan-a

              @ronan-a Thanks for taking the time to explain, it's appreciated. I'll do some further reading to improve my understanding.

              Do you know if it is possible to add multiple XOSTOR SRs so that I have the option of separating disk types, e.g. one XOSTOR SR for NVMe SSDs and one for SATA SSDs?
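
              For context, what I'm after looks like what plain LINSTOR does with per-node storage pools, one per disk class, roughly as below (names are hypothetical, and I realise XOSTOR may not expose anything like this):

                  # Hypothetical: one LVM-backed LINSTOR storage pool per disk class on each host
                  linstor storage-pool create lvm hv-01 pool_nvme vg_nvme
                  linstor storage-pool create lvm hv-01 pool_sata vg_sata
                  linstor storage-pool create lvm hv-02 pool_nvme vg_nvme
                  linstor storage-pool create lvm hv-02 pool_sata vg_sata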

              • ronan-a (Vates 🪐 XCP-ng Team) @David

                @David For the moment, only one XOSTOR SR can be used in a pool. We currently have no plans to lift this limitation, at least not while we have tickets relating to bugs or performance issues. I think it will come one day.

                • David @ronan-a

                  @ronan-a I think it would be a really useful option, hopefully we'll see it in the future. Thanks again for taking the time to answer my questions.

                  • ronan-a (Vates 🪐 XCP-ng Team) @David

                    @David I think the complexity is in offering a simple interface / API for users to configure multiple storages, maybe through SMAPIv3.
                    In any case, we currently only support one storage pool; the sm driver would have to be reworked to support several. It also probably needs visibility from XOA's point of view. There are lots of points to discuss. I will create a card on our internal Kanban.

                    Regarding multiple XOSTOR SRs, we must:

                    • Add a way to move the controller volume to a specific storage pool.
                    • Ensure the controller is still accessible to the remaining SRs when one SR is destroyed.
                    • Use a lock mechanism to protect the LINSTOR environment.