XCP-ng
    Three-node Networking for XOSTOR

    XOSTOR
    15 Posts 7 Posters 2.3k Views 7 Watching
    • T3CCH @olivierlambert

      @olivierlambert Is it possible to go switchless?

      I have 3 hosts, each with four 25 Gb connections. I bonded the adapters going to each host in pairs.

      I have looked quite a bit for a setup and network guide to XOSTOR.

      Do you have a link?

      Thank you so much for your time

      • olivierlambert Vates 🪐 Co-Founder CEO

        Question for @ronan-a

        • Maelstrom96 @T3CCH

          @T3CCH What you might be looking for: https://xcp-ng.org/docs/networking.html#full-mesh-network

          • ha_tu_su

            I am experimenting with xcp-ng as a viable option to replace my company's 3-node clusters (2 main nodes + 1 witness node) at customer sites. The hardware we are using has two 10 Gig NICs; the rest are 1 Gig.

            Since XOSTOR is based on LINSTOR, I first experimented with implementing a LINSTOR HA cluster on 3 nodes on Proxmox. Although a mesh network was an option, I wanted to explore other ways of implementing a storage network without a switch, since the mesh network is only 'community-supported' at the moment. I ended up doing the following:

            1. Only 2 nodes contribute to the storage pool. The 3rd node is just there to maintain quorum for the hypervisor and for LINSTOR.
            2. All 3 nodes are connected to each other via a 1 Gig Ethernet switch. This is the management network for the cluster.
            3. The 2 main nodes are directly connected to each other via a 10 Gig link. This is the storage network. Note that the witness node is not connected to the storage network.
            4. Create a loopback interface on the witness node which has an IP in the storage network subnet.
            5. Enable ip_forward=1 (net.ipv4.ip_forward) on all 3 nodes.
            6. Add static routes on the 2 main hosts like the following:
              ip route add <witness_node_storage_int_ip>/32 dev <main_host_mgmt_int_name>
            7. Add static routes on the witness node like the following:
              ip route add <main_node1_storage_int_ip>/32 dev <witness_host_mgmt_int_name>
              ip route add <main_node2_storage_int_ip>/32 dev <witness_host_mgmt_int_name>
            8. After this, all 3 nodes can talk to each other on the storage subnet. LINSTOR traffic to and from the witness node will use the management network. Since this traffic is light, it will not hamper other traffic on the management network.
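            Steps 5-7 above can be sketched as a handful of commands. The IPs and the interface name below are hypothetical placeholders for the `<...>` values in the list, not values from the actual setup:

            ```shell
            # Assumed addressing: storage subnet 10.10.10.0/24, main nodes at
            # 10.10.10.1 and .2, witness storage IP 10.10.10.3 (on its loopback),
            # and eth0 as the management interface on every node.

            # Step 5 - enable IPv4 forwarding on all 3 nodes:
            sysctl -w net.ipv4.ip_forward=1

            # Step 6 - on each main node, reach the witness's storage IP via mgmt:
            ip route add 10.10.10.3/32 dev eth0

            # Step 7 - on the witness node, reach both main nodes' storage IPs via mgmt:
            ip route add 10.10.10.1/32 dev eth0
            ip route add 10.10.10.2/32 dev eth0
            ```

            Note that routes added this way do not survive a reboot; they would need to be made persistent in the distribution's network configuration.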

            Now I want to do a similar implementation in xcp-ng and XOSTOR. Proxmox was basically a learning ground to iron out issues and get a grasp on LINSTOR concepts. So now the questions are:

            1. Is the above method doable on xcp-ng?
            2. Is it advisable to do 3 node storage network without switches this way?
            3. Any issues with enterprise support if someone does this?

            Thanks.

            • 456Q @ha_tu_su

              @ha_tu_su Hi, I come from a similar setup and did the following:

              1. Create a pool with two physical nodes.
              2. Add a third node, which in my case is virtual AND not running on one of the two physical nodes.
              3. Create XOSTOR with a replication count of 2. The virtual node will be marked as diskless by default.

              In XOSTOR the diskless node is called a "tie breaker". From my understanding it is very similar to the witness in vSAN.

              You can also go ahead and select a dedicated network for XOSTOR. This can be done post-create as well. But I'm not sure if this will work without a switch.
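              For reference, SR creation with a replication count of 2 looks roughly like the sketch below. The names and device-config values are illustrative assumptions, not taken from this thread; check the current XOSTOR documentation for the exact parameters:

              ```shell
              # Illustrative only - assumes the XOSTOR packages are installed and an
              # LVM volume group named "linstor_group" exists on the disk-bearing hosts.
              xe sr-create type=linstor name-label=XOSTOR shared=true \
                device-config:group-name=linstor_group/thin_device \
                device-config:provisioning=thin \
                device-config:redundancy=2
              ```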

              • ha_tu_su @456Q

                @456Q
                Your setup makes sense to me, although I guess it would still have required some physical hardware to run the virtual witness. Correct?

                I am pretty sure that a mesh network will work for a 3-node setup (at least it makes sense to me). It's the support part that I wanted to get an answer on, since we will be deploying at customer sites which require official support contracts.

                I am planning to test the 'routing' method this week. If it works and will be supported by Vates then that is what we will go with.

                Thanks.

                • ha_tu_su @456Q

                  @456Q
                  Also, when you use a replication count of 2, do you get redundant linstor-controllers managed by drbd-reactor?

                  I saw some posts with drbd-reactor commands on the forum, so I am assuming it is used to manage redundant linstor-controllers.

                  When I experimented in Proxmox I used only one linstor-controller, because getting drbd-reactor installed was tedious. I hope XOSTOR does this automatically.
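                  Upstream, the LINSTOR HA guide drives controller failover with drbd-reactor's promoter plugin. A minimal sketch, assuming a DRBD resource named linstor_db holding the controller database (the resource name and mount unit are assumptions from that guide, not something XOSTOR is confirmed to use):

                  ```shell
                  # Sketch of the upstream drbd-reactor promoter setup, per controller node.
                  cat > /etc/drbd-reactor.d/linstor_db.toml <<'EOF'
                  [[promoter]]
                  [promoter.resources.linstor_db]
                  # Mount the controller database volume, then start the controller,
                  # on whichever node drbd-reactor promotes to DRBD Primary.
                  start = ["var-lib-linstor.mount", "linstor-controller.service"]
                  EOF
                  systemctl restart drbd-reactor
                  ```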

                  • ha_tu_su @ha_tu_su

                    @ha_tu_su
                    Ok, I have set up everything as I said I would. The only thing I wasn't able to set up was the loopback adapter on the witness node mentioned in step 4; I assigned the storage network IP to a spare physical interface instead. This doesn't change the logic of how everything should work together.

                    So far, I have created XOSTOR using disks from the 2 main nodes. I have enabled routing and added the necessary static routes on all hosts, enabled HA on the pool, created a test VM, and tested migration of that VM between the main nodes. I haven't yet tested HA by disconnecting the network or powering off hosts. I am planning to do this testing in the next 2 days.

                    @olivierlambert: Can you answer the question on enterprise support for such a topology? And do you see any technical pitfalls with this approach?

                    I admit I am fairly new to HA stuff related to virtualization, so any feedback from the community is appreciated, just to enlighten me.

                    Thanks.

                    • ha_tu_su @ha_tu_su

                      @ha_tu_su

                      I have executed the above steps and currently my XOSTOR network looks like this:
                      [screenshot]

                      When I set 'vsan' as my preferred NIC, I get the below output on the linstor-controller node:
                      [screenshot]

                      Connectivity between all 3 nodes is present:
                      [screenshot]

                      When I set 'default' as my preferred NIC, I get correct output on the linstor-controller:
                      [screenshot]

                      @ronan-a: Can you help out here?

                      Thanks.

                      • ha_tu_su @ha_tu_su

                        @ha_tu_su
                        @olivierlambert @ronan-a : Any insight into this?

                        • ronan-a Vates 🪐 XCP-ng Team @ha_tu_su

                          @ha_tu_su
                          Regarding your previous message, for a LINSTOR SR to be functional:

                          • Each node in the pool must have a PBD attached.
                          • A node is allowed to have no local physical disk.
                          • ip route should not be used manually; LINSTOR has an API for using dedicated network interfaces.
                          • XOSTOR effectively supports configurations with diskless hosts, and quorum can still be used.

                          Could you list the interfaces using linstor node interface list <NODE>?
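                          The dedicated-interface API mentioned above can be sketched like this. The node and storage-pool names are placeholders; 'vsan' matches the NIC name used earlier in the thread:

                          ```shell
                          # Register the storage NIC with LINSTOR on each node...
                          linstor node interface create node1 vsan 10.10.10.1
                          # ...verify it is known...
                          linstor node interface list node1
                          # ...then prefer it for DRBD traffic on a given storage pool:
                          linstor storage-pool set-property node1 my_pool PrefNic vsan
                          ```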

                          • ha_tu_su @ronan-a

                            @ronan-a
                            Unfortunately, I am in the process of reinstalling XCP-ng on the nodes to start from scratch; I figured I had tried too many things and somewhere forgot to undo the 'wrong' configs, so I can't run the command right now. I had run it before, when I posted the screenshots, and the output had 2 entries (from memory):

                            1. StltCon		<mgmt_ip>		3366		Plain
                            2. 		   <storage_nw_ip>	        3366		Plain	
                            

                            I will repost with the required data when I get everything configured again.

                            Thanks.
