XCP-ng

    Epyc VM to VM networking slow

    Compute
    206 Posts 23 Posters 101.2k Views 26 Watching
    • J Offline
      john.c @manilx
      last edited by john.c

      @manilx said in Epyc VM to VM networking slow:

      @john-c Don't have one of those @office, only 2 @homelab and needed there.
      BUT I have a Minisforum NPB7 with 2.5G NICs.
      Will install XCP-ng later today and try tomorrow....

      You can even do it on another actual server, as long as it is outside of the EPYC server pool(s), and preferably with a non-affected CPU, as this will force it to use a physical interface.
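One quick way to quantify the difference between the in-memory VM-to-VM path and the physical-NIC path is an iperf3 run between the two ends. A hedged sketch (the address is a placeholder, and iperf3 must be installed in both VMs):

```shell
# On the receiving VM (e.g. the XO/XOA VM): start an iperf3 server
iperf3 -s

# From a VM on the affected EPYC pool: run a 30-second throughput test
# against the receiver. Compare the result for a VM pair on the same
# host (affected path) versus a pair on different hosts (physical path).
iperf3 -c <receiver-ip> -t 30
```

Running the client once against a same-host VM and once against a VM behind a physical interface makes the EPYC slowdown directly visible in the reported bitrates.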

      • M Online
        manilx @john.c
        last edited by

        @john-c Could do it to our DR-pools on older Intel hw.

        But as it is I prefer to try the Minisforum, which is fast and I can connect it to the same switches as the Business Epyc pool.

        Will revert tomorrow with results.

        • J Offline
          john.c @manilx
          last edited by john.c

          @manilx said in Epyc VM to VM networking slow:

          @john-c Could do it to our DR-pools on older Intel hw.

          But as it is I prefer to try the Minisforum, which is fast and I can connect it to the same switches as the Business Epyc pool.

          Will revert tomorrow with results.

           Depending on the results, another server (or an additional one), like the Intel ones from the DR pool or modern Intel-based server hardware, would be un-affected and could host it. Then have the AMD EPYC pools connect to that Intel server hosting XO/XOA, with the affinity of the XO/XOA VM set to the Intel-based host for the duration of the EPYC bug.

          • S Offline
            Seneram @john.c
            last edited by

             @john-c That would only work for someone where it is possible to have non-EPYC machines handle this though 😄 Unfortunately not an option for us.

            • J Offline
              john.c @Seneram
              last edited by john.c

              @Seneram said in Epyc VM to VM networking slow:

               @john-c That would only work for someone where it is possible to have non-EPYC machines handle this though 😄 Unfortunately not an option for us.

               So it would need to be a separate host outside of the other EPYC pool(s); in other words it can be another AMD or an Intel, but preferably Intel, as long as it's outside of all the other pools and is just hosting the XO/XOA VM.

              Then have the other EPYC pool(s) connect to the XO/XOA VM on its separate hosting system. That way there will be less of an impact, as the other EPYC servers will have to use their physical NICs in order to connect to XO/XOA.
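As a sketch of how that affinity could be set from the CLI using the stock `xe` tooling (the UUIDs below are placeholders to be looked up first):

```shell
# Look up the UUIDs of the Intel host and of the XO/XOA VM
xe host-list params=uuid,name-label
xe vm-list params=uuid,name-label

# Set the VM's home server (affinity) to the Intel host so it
# preferentially starts there
xe vm-param-set uuid=<xoa-vm-uuid> affinity=<intel-host-uuid>
```

Note that affinity is a soft preference for placement at start time; it does not prevent manually migrating the VM elsewhere.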

              • M Online
                manilx @john.c
                last edited by

                @john-c If this Works the 600€ Minisforum will have a new job! Sitting idle now in the cabinet.

                • J Offline
                  john.c @manilx
                  last edited by john.c

                  @manilx said in Epyc VM to VM networking slow:

                  @john-c If this Works the 600€ Minisforum will have a new job! Sitting idle now in the cabinet.

                   Depending on the results of the test, I would recommend actual server hardware for hosting the XO/XOA VM, as server-grade hardware receives more QA than desktops, laptops and mini computers. So you may need to get additional server hardware to use for the XO/XOA VM.

                   Actual server hardware also has out-of-band management controllers (BMCs). Non-server-grade hardware often doesn't have this functionality, so remotely managing and monitoring it is much harder or even impossible.

                   Finally, on top of this, server hardware is more likely to be on the XenServer HCL, and so more likely to be eligible for paid support from Vates.

                  • J Offline
                    john.c
                    last edited by john.c

                     @olivierlambert The user manilx is going to try running XO/XOA on an Intel-based host, to see what the results are like with the EPYC pool(s) connecting to it.

                     They're going to run the test tomorrow and report back with results. I have posted my recommendation (opinion) above to maybe have an Intel-based, server-grade host running the XO/XOA VM for the duration of the EPYC VM to VM networking bug.

                    What do you think?

                    • M Online
                      manilx @john.c
                      last edited by

                       @john-c Minisforum NPB7 all set up with XCP-ng 8.2 LTS
                      Ready to connect to network tomorrow. Will have results before 10:00 GMT.

                      • M Online
                        manilx @john.c
                        last edited by

                        @john-c Let's see what the test will do. If this fixes it I will remove one host (HPE ProLiant DL360 Gen10) out of the 3 host DR pool and dedicate it to this.

                        • M Online
                          manilx @manilx
                          last edited by

                           @john-c I did a quick test. Installed XO on our DR pool. Ran a new full backup; the NICs are 1G only, BUT the backup fully saturates the NIC, double the performance of XOA running on EPYC hosts!!

                           Tomorrow as said I'll run it on the Minisforum with 2.5G NICs. Should saturate them also (it does @homelab).
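For context on what "saturating the NIC" should look like in a backup job, a rough back-of-envelope calculation (the ~94% payload-efficiency figure is an assumption for standard 1500-byte TCP/IPv4 frames):

```python
def max_payload_mbps(link_gbps: float, efficiency: float = 0.94) -> float:
    """Approximate usable TCP payload throughput in MB/s for a link speed.

    link_gbps  -- nominal link speed in Gb/s
    efficiency -- assumed payload fraction after Ethernet/IP/TCP overhead
    """
    return link_gbps * 1000 / 8 * efficiency

# Roughly: 1 GbE -> ~117 MB/s, 2.5 GbE -> ~294 MB/s, 10 GbE -> ~1175 MB/s
for gbps in (1, 2.5, 10):
    print(f"{gbps} Gb/s -> ~{max_payload_mbps(gbps):.0f} MB/s")
```

So a backup stream holding steady near ~117 MB/s on a 1G link, or ~294 MB/s on a 2.5G link, is effectively line rate.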

                          • J Offline
                            john.c @manilx
                            last edited by john.c

                            @manilx said in Epyc VM to VM networking slow:

                             @john-c I did a quick test. Installed XO on our DR pool. Ran a new full backup; the NICs are 1G only, BUT the backup fully saturates the NIC, double the performance of XOA running on EPYC hosts!!

                             Tomorrow as said I'll run it on the Minisforum with 2.5G NICs. Should saturate them also (it does @homelab).

                             If the HPE ProLiant DL360 Gen10, when dedicated, were to receive a 10 Gb/s PCIe Ethernet NIC, it would increase that further. Especially if it's a 4-port version of that 10 Gb/s PCIe NIC, with the ports in two pairs forming 2 LACP bonds.

                            • M Online
                              manilx @john.c
                              last edited by manilx

                              @john-c If the test goes well, this is what we'll do. Buy a 10G NIC (4 ports)

                              • J Offline
                                john.c @manilx
                                last edited by john.c

                                @manilx said in Epyc VM to VM networking slow:

                                @john-c If the test goes well, this is what we'll do. Buy a 10G NIC (2 ports)

                                 I've bonded a 4-port NIC (though in my case 1 Gb/s only, as that's all my home network can handle) into 2 LACP bonds. But if done with a 4-port 10G NIC it will really fly with 2 pair LACP (IEEE 802.3ad) bonds. When I refer to 2 pair LACP (IEEE 802.3ad) bonds, I mean 2 10G ports per bond, with 2 bonds, to really unleash its performance.

                                 For example, the 536FLR FlexFabric 10Gb 4-port adapter from HPE is a 4-port 10G NIC; another suitable NIC with 4 ports would also do. It could also be a 2-port 10G NIC (without LACP, due to 1 port for each network), but that will have less of a speed increase.

                                https://www.hpe.com/psnow/doc/c04939487.html?ver=13
                                https://support.hpe.com/hpesc/public/docDisplay?docId=a00018804en_us&docLocale=en_US
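For reference, a bond like that can also be created from the XCP-ng CLI. A minimal sketch, assuming two 10G PIFs per bond (the UUIDs are placeholders, and the corresponding switch ports must be configured for LACP as well):

```shell
# Create a network to carry the bond, then bond two PIFs in
# LACP (IEEE 802.3ad) mode; repeat for the second pair of ports
xe network-create name-label=bond0-net
xe bond-create network-uuid=<network-uuid> \
    pif-uuids=<pif-a-uuid>,<pif-b-uuid> mode=lacp
```

The PIF UUIDs for the physical ports can be found beforehand with `xe pif-list params=uuid,device,host-name-label`.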

                                • M Online
                                  manilx @manilx
                                  last edited by

                                   @manilx P.S. Seems like the slow backup speed WAS related to the EPYC bug, as I suggested a while ago.....
                                   Should have tested this "workaround" a LONG time ago 😞

                                  • M Online
                                    manilx @john.c
                                    last edited by manilx

                                     @john-c Tried to test a new backup on the NPB7.

                                    I restored the latest XOA backup to that host (didn't want to move the original one from the business pool).
                                    On trying to test the backup I get: Error: feature Unauthorized
                                    ???

                                     I've spun up an XO instance in the meantime on the Intel test host and imported the settings from XOA.

                                     Ran the same backup on both hosts to the same QNAP.

                                    NPB7 Intel host connected via 2,5G:
                                    ScreenShot 2024-10-24 at 09.59.47.png

                                    HP EPYC host connected via 10G:
                                    ScreenShot 2024-10-24 at 10.23.06.png

                                    The difference is apparent!

                                     I will now detach the older HP from the backup pool, install it as an isolated pool and run XOA from there. Will order 10G NICs.

                                    Now why does XOA error with "Error: feature Unauthorized" when I try to run backups from there??

                                    • J Offline
                                      john.c @manilx
                                      last edited by john.c

                                      @manilx said in Epyc VM to VM networking slow:

                                       @john-c Tried to test a new backup on the NPB7.

                                      I restored the latest XOA backup to that host (didn't want to move the original one from the business pool).
                                      On trying to test the backup I get: Error: feature Unauthorized
                                      ???

                                       I've spun up an XO instance in the meantime on the Intel test host and imported the settings from XOA.

                                       Ran the same backup on both hosts to the same QNAP.

                                      NPB7 Intel host connected via 2,5G:
                                      ScreenShot 2024-10-24 at 09.59.47.png

                                      HP EPYC host connected via 10G:
                                      ScreenShot 2024-10-24 at 10.23.06.png

                                      The difference is apparent!

                                       I will now detach the older HP from the backup pool, install it as an isolated pool and run XOA from there. Will order 10G NICs.

                                      Now why does XOA error with "Error: feature Unauthorized" when I try to run backups from there??

                                       It's likely because the license is attached to the EPYC instance of XO/XOA. A license can only be bound to one appliance at a time, and it is currently bound to the EPYC instance. Your HPE Intel instance is only available as unlicensed or as the Free Edition until the license is re-bound from the EPYC pool's instance.

                                       Anyway, overnight I realised that to maintain the availability of XO/XOA during updates on the host, the dedicated server would need a second host to join its pool. This would allow for RPU (Rolling Pool Update) on the XO/XOA host when updating its XCP-ng instance.

                                      https://xen-orchestra.com/docs/license_management.html#rebind-xo-license

                                      • M Online
                                        manilx @john.c
                                        last edited by

                                        @john-c License moved. All fine.

                                         Backup Intel host running on 1G NICs (for the time being), bonded LACP.

                                        Already faster than before.

                                        I have an XO instance running on a Proxmox host to be able to manage the pools when the main XOA is down (updates etc), so I'm good there and don't need another (2nd) backup host (would be crazy overkill).

                                        • J Offline
                                          john.c @manilx
                                          last edited by john.c

                                          @manilx said in Epyc VM to VM networking slow:

                                          @john-c License moved. All fine.

                                           Backup Intel host running on 1G NICs (for the time being), bonded LACP.

                                          Already faster than before.

                                          I have an XO instance running on a Proxmox host to be able to manage the pools when the main XOA is down (updates etc), so I'm good there and don't need another (2nd) backup host (would be crazy overkill).

                                           I mean have the Proxmox host run XCP-ng then, and have it join the XO/XOA's pool, preferably if they have the same hardware and components. That way, when the HPE ProLiant DL360 Gen10 is down for updates, the XO/XOA VM can live-migrate between them as required. So you can have RPU on the dedicated XO/XOA Intel-based hosts.

                                          • M Online
                                            manilx @john.c
                                            last edited by

                                            @john-c Proxmox host is a Protectli. All good. XOA will be on the single Intel host pool, no need for redundancy here.
                                            XO on Proxmox for emergencies.....

                                            Remember: this is ALL a WORKAROUND for the stupid AMD EPYC bug!!!!!!
                                             By no means the final solution.

                                             The final solution is XOA running on our EPYC production pool, as it was.
