XCP-ng

Epyc VM to VM networking slow

Compute · 206 Posts · 23 Posters · 101.4k Views
• olivierlambert (Vates 🪐 Co-Founder & CEO)

IIRC, as long as the traffic goes via a physical NIC, the impact is greatly reduced; that's because of the NIC offload work. That's why it's better to test with XO outside the master itself, so the traffic leaves the host.
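
A minimal sketch of how such a check could be scripted (an illustration, not from the original post): it compares iperf3 throughput toward a VM on the same host versus a VM reached over a physical NIC. It assumes iperf3 is installed, an `iperf3 -s` server is running in each target VM, and the IPs are placeholders.

```python
import json
import subprocess

def throughput_gbps(server_ip: str, seconds: int = 10) -> float:
    """Run an iperf3 client against server_ip; return received Gbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server_ip, "-t", str(seconds), "-J"],  # -J = JSON output
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    same_host = throughput_gbps("10.0.0.11")   # VM on the same EPYC host (placeholder IP)
    cross_host = throughput_gbps("10.0.0.12")  # VM behind a physical NIC (placeholder IP)
    print(f"same host:  {same_host:.2f} Gbit/s")
    print(f"cross host: {cross_host:.2f} Gbit/s")
```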

• john.c @olivierlambert

@olivierlambert said in Epyc VM to VM networking slow:

IIRC, as long as the traffic goes via a physical NIC, the impact is greatly reduced; that's because of the NIC offload work. That's why it's better to test with XO outside the master itself, so the traffic leaves the host.

@Seneram What olivierlambert is saying is to have XO/XOA on another system which the pool connects to, but outside of the other pools.

• manilx @john.c

@john-c You mean on another host not belonging to the "EPYC" pool? Could try that. I have XO running on a Protectli, BUT they only have 1 Gb networking.....

• john.c @manilx

@manilx said in Epyc VM to VM networking slow:

@john-c You mean on another host not belonging to the "EPYC" pool? Could try that. I have XO running on a Protectli, BUT they only have 1 Gb networking.....

Yes. If you can do it on a Protectli with a 4-port or 6-port model, they can potentially have 2.5 Gb/s LAN or even 10 Gb/s LAN if using an SFP+ module.

• manilx @john.c

@john-c Don't have one of those @office, only 2 @homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5 Gb NICs.
Will install XCP-ng later today and try tomorrow....

• john.c @manilx

@manilx said in Epyc VM to VM networking slow:

@john-c Don't have one of those @office, only 2 @homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5 Gb NICs.
Will install XCP-ng later today and try tomorrow....

You can even do it on another actual server, as long as it is outside of the EPYC server pool(s), preferably one with a non-affected CPU. This forces the traffic to go through a physical interface.

• manilx @john.c

@john-c Could do it on our DR pools on older Intel hardware.

But as it is, I prefer to try the Minisforum, which is fast, and I can connect it to the same switches as the business EPYC pool.

Will report back tomorrow with results.

• john.c @manilx

@manilx said in Epyc VM to VM networking slow:

@john-c Could do it on our DR pools on older Intel hardware.

But as it is, I prefer to try the Minisforum, which is fast, and I can connect it to the same switches as the business EPYC pool.

Will report back tomorrow with results.

Depending on the results, another server (or an additional one), like the Intel ones from the DR pool or modern Intel-based server hardware, would be unaffected and could host it. Then have the AMD EPYC pools connect to that Intel server hosting XO/XOA, with the affinity of the XO/XOA VM set to the Intel-based host for the duration of the EPYC bug.
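
Setting that affinity can also be scripted; here is a minimal sketch with the XenAPI Python bindings (an illustration, not from the original post), assuming placeholder URL, credentials and name labels. Note that `VM.set_affinity` is a soft placement preference, honoured when the VM starts.

```python
import XenAPI

# Connect to the pool master (placeholder URL and credentials).
session = XenAPI.Session("https://pool-master.example")
session.xenapi.login_with_password("root", "secret")
try:
    vm = session.xenapi.VM.get_by_name_label("XOA")[0]             # the XO/XOA VM
    host = session.xenapi.host.get_by_name_label("intel-host")[0]  # the unaffected Intel host
    session.xenapi.VM.set_affinity(vm, host)  # soft preference used when choosing a start host
finally:
    session.xenapi.session.logout()
```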

• Seneram @john.c

@john-c That would only work for someone who can have non-EPYC machines handle this, though 😄 Unfortunately not an option for us.

• john.c @Seneram

@Seneram said in Epyc VM to VM networking slow:

@john-c That would only work for someone who can have non-EPYC machines handle this, though 😄 Unfortunately not an option for us.

So it would need to be a separate host outside of the other EPYC pool(s); in other words, it can be another AMD or an Intel machine, but preferably Intel, as long as it's outside of all the other pools and just hosting the XO/XOA VM.

Then have the other EPYC pool(s) connect to the XO/XOA VM on its separate hosting system. That way there will be less of an impact, as the EPYC servers will have to use their physical NICs to connect to XO/XOA.

• manilx @john.c

@john-c If this works, the €600 Minisforum will have a new job! It's sitting idle in the cabinet now.

• john.c @manilx

@manilx said in Epyc VM to VM networking slow:

@john-c If this works, the €600 Minisforum will have a new job! It's sitting idle in the cabinet now.

Depending on the results of the test, I would recommend actual server hardware for hosting the XO/XOA VM, as server-grade hardware receives more QA than desktops, laptops and mini computers. So you may need to get additional server hardware to use for the XO/XOA VM.

Actual server hardware also has out-of-band management controllers (BMCs). Non-server-grade hardware often lacks this functionality, so remotely managing and monitoring it is much harder or even impossible.

Finally, on top of this, server hardware is more likely to be on the XenServer HCL, and so more likely to qualify for paid support from Vates.

• john.c

@olivierlambert The user manilx is going to try a test running XO/XOA on an Intel-based host, to see what the results are like with the EPYC pool(s) connecting to it.

They're going to run the test tomorrow and report back with results. I have posted my recommendation (opinion) above to maybe have an Intel-based, server-grade host running the XO/XOA VM for the duration of the EPYC VM-to-VM networking bug.

What do you think?

• manilx @john.c

@john-c Minisforum NPB7 all set up with XCP-ng 8.2 LTS.
Ready to connect to the network tomorrow. Will have results before 10:00 GMT.

• manilx @john.c

@john-c Let's see what the test will do. If this fixes it, I will remove one host (HPE ProLiant DL360 Gen10) from the 3-host DR pool and dedicate it to this.

• manilx @manilx

@john-c I did a quick test. Installed XO on our DR pool and ran a new full backup. The NICs are 1G only, BUT the backup fully saturates the NIC: double the performance of XOA running on the EPYC hosts!!

Tomorrow, as said, I'll run it on the Minisforum with 2.5G NICs. It should saturate them as well (it does @homelab).

• john.c @manilx

@manilx said in Epyc VM to VM networking slow:

@john-c I did a quick test. Installed XO on our DR pool and ran a new full backup. The NICs are 1G only, BUT the backup fully saturates the NIC: double the performance of XOA running on the EPYC hosts!!

Tomorrow, as said, I'll run it on the Minisforum with 2.5G NICs. It should saturate them as well (it does @homelab).

If the HPE ProLiant DL360 Gen10, once dedicated, were to receive a 10 Gb/s PCIe Ethernet NIC, that would increase throughput further. Especially if it's a 4-port version of that 10 Gb/s PCIe NIC, with the ports paired into 2 LACP bonds.

• manilx @john.c

@john-c If the test goes well, this is what we'll do: buy a 10G NIC (4 ports).

• john.c @manilx

@manilx said in Epyc VM to VM networking slow:

@john-c If the test goes well, this is what we'll do: buy a 10G NIC (4 ports).

I've bonded a 4-port NIC (though in my case only 1 Gb/s, as that's all my home network can handle) into 2 LACP bonds. But done with a 4-port 10G NIC it will really fly with 2 LACP (IEEE 802.3ad) bonds. By that I mean 2 bonds with 2 10G ports each, to really unleash its performance.

For example, the HPE 536FLR FlexFabric 10Gb 4-port adapter is a 4-port 10G NIC, or use another suitable NIC with 4 ports. A 2-port 10G NIC can work too (without LACP, since one port is needed for each network), but it will give less of a speed increase.

https://www.hpe.com/psnow/doc/c04939487.html?ver=13
https://support.hpe.com/hpesc/public/docDisplay?docId=a00018804en_us&docLocale=en_US
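
As an illustrative sketch (not from the original post), one LACP (IEEE 802.3ad) bond of two 10G ports could be created through the XenAPI Python bindings roughly as below, repeated for the second pair to get the 2x2 layout described above. The URL, credentials and device names are placeholders, a single-host pool is assumed, and the switch ports must be configured for LACP as well.

```python
import XenAPI

session = XenAPI.Session("https://pool-master.example")  # placeholder pool master
session.xenapi.login_with_password("root", "secret")
try:
    # Network the bonded interface will attach to.
    net = session.xenapi.network.create({
        "name_label": "bond0-net",
        "name_description": "LACP bond of eth2+eth3",
        "MTU": "1500",
        "other_config": {},
    })
    # PIFs of the two physical 10G ports to bond (placeholder device names;
    # on a multi-host pool, also filter by host).
    pifs = [p for p in session.xenapi.PIF.get_all()
            if session.xenapi.PIF.get_device(p) in ("eth2", "eth3")]
    # Empty MAC lets the server pick one; "lacp" needs matching switch config.
    session.xenapi.Bond.create(net, pifs, "", "lacp", {})
finally:
    session.xenapi.session.logout()
```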

• manilx @manilx

@manilx P.S. Seems like the slow backup speed WAS related to the EPYC bug, as I suggested a while ago.....
Should have tested this "workaround" a LONG time ago 😞
