XCP-ng

    Epyc VM to VM networking slow

    Compute
    206 Posts 23 Posters 101.5k Views 26 Watching
    • M Offline
      manilx @manilx
      last edited by

      @manilx P.S. Seems like the slow backup speed WAS related to the EPYC bug, as I suggested a while ago...
      Should have tested this "workaround" a LONG time ago 😞

      1 Reply Last reply Reply Quote 0
      • M Offline
        manilx @john.c
        last edited by manilx

        @john-c Tried to test a new backup on the NPB7.

        I restored the latest XOA backup to that host (didn't want to move the original one from the business pool).
        On trying to test the backup I get: Error: feature Unauthorized
        ???

        I've spun up a XO instance in the meantime on the Intel test host and imported settings from XOA.

        I ran the same backup on both hosts to the same QNAP.

        NPB7 Intel host connected via 2.5G:
        ScreenShot 2024-10-24 at 09.59.47.png

        HP EPYC host connected via 10G:
        ScreenShot 2024-10-24 at 10.23.06.png

        The difference is apparent!

        I will now detach the older HP from the backup pool, install it as an isolated pool, and run XOA from there. Will order 10G NICs.

        Now why does XOA error with "Error: feature Unauthorized" when I try to run backups from there??

        • J Offline
          john.c @manilx
          last edited by john.c

          @manilx said in Epyc VM to VM networking slow:

          […]

          Now why does XOA error with "Error: feature Unauthorized" when I try to run backups from there??

          It's likely because the license is attached to the EPYC instance of XO/XOA. The license can only be bound to one appliance at a time, and it is currently bound to the EPYC instance. Your HPE Intel instance is only available as unlicensed or the Free Edition until the license is re-bound from the EPYC pool's instance.

          Anyway, overnight I realised that to maintain the availability of XO/XOA during host updates, the dedicated server would need a second host to join its pool. This would allow a rolling pool update (RPU) on the XO/XOA host when updating its XCP-ng instance.

          https://xen-orchestra.com/docs/license_management.html#rebind-xo-license

          • M Offline
            manilx @john.c
            last edited by

            @john-c License moved. All fine.

            Backup Intel host running on 1G NICs (for the time being) in an LACP bond.

            Already faster than before.

            I have an XO instance running on a Proxmox host so I can manage the pools when the main XOA is down (updates, etc.), so I'm good there and don't need another (2nd) backup host (that would be crazy overkill).
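            For anyone wanting to reproduce the LACP setup mentioned here, a bond can be created on an XCP-ng host with the xe CLI. This is a sketch only: the UUIDs are placeholders you must look up yourself, and the switch ports must also be configured as an 802.3ad aggregation group.

            ```shell
            # Sketch: bond two NICs with LACP on an XCP-ng host (placeholder UUIDs).
            # 1. Create the network the bond will attach to:
            xe network-create name-label=bond0-network

            # 2. Find the PIF UUIDs of the physical NICs to bond:
            xe pif-list params=uuid,device,host-name-label

            # 3. Create the bond in LACP (802.3ad) mode -- the switch side
            #    must be configured for LACP as well:
            xe bond-create network-uuid=<network-uuid> \
                pif-uuids=<pif-uuid-1>,<pif-uuid-2> mode=lacp
            ```
            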

            • J Offline
              john.c @manilx
              last edited by john.c

              @manilx said in Epyc VM to VM networking slow:

              @john-c License moved. All fine. […]

              I have an XO instance running on a Proxmox host to be able to manage the pools when the main XOA is down (updates etc), so I'm good there and don't need another (2nd) backup host (would be crazy overkill).

              I mean: make the Proxmox host an XCP-ng host too and have it join the XO/XOA's pool, preferably with identical hardware and components. That way, when the HPE ProLiant DL360 Gen10 is down for updates, the XO/XOA VM can live-migrate between them as required. So you can run RPU on the dedicated XO/XOA Intel-based hosts.

              • M Offline
                manilx @john.c
                last edited by

                @john-c The Proxmox host is a Protectli. All good. XOA will be on the single Intel host pool; no need for redundancy here.
                XO on Proxmox for emergencies...

                Remember: this is ALL a WORKAROUND for the stupid AMD EPYC bug! It is by no means the final solution.

                The final setup is XOA running on our EPYC production pool, as it was.

                • J Offline
                  john.c @manilx
                  last edited by

                  @manilx said in Epyc VM to VM networking slow:

                  […] Remember: this is ALL a WORKAROUND for the stupid AMD EPYC bug! It is by no means the final solution. The final setup is XOA running on our EPYC production pool, as it was.

                  Alright, in that case just the HPE ProLiant DL360 Gen10 as a dedicated XO/XOA host. But bear in mind that while it's updating the XCP-ng installed on it, the host, and thus that instance of XO/XOA, will be unavailable until the reboot completes.

                  • M Offline
                    manilx @john.c
                    last edited by

                    @john-c Yes, obviously. For that I have XO on a mini-pc 🙂

                    • M Offline
                      manilx @manilx
                      last edited by manilx

                      @john-c @olivierlambert ScreenShot 2024-10-24 at 13.10.42.png
                      One of our standard backup jobs. This is a 100% increase!!! On a 1G LACP bond, instead of 10G on the EPYC host!

                      1.5 years battling with this, and in the end it's all due to the same issue, as we now see.

                      • S Offline
                        Seneram @manilx
                        last edited by

                        @manilx It is definitely interesting to see more proof that this bug may be more widespread than expected.

                        • J Offline
                          john.c @manilx
                          last edited by

                          @manilx said in Epyc VM to VM networking slow:

                          One of our standard backup jobs. This is a 100% increase!!! On a 1G LACP bond, instead of 10G on the EPYC host!

                          Don't forget to also post the comparison screenshot when you have fitted the 4-port 10G NIC with the two LACP bonds on the Intel HPE!

                          • M Offline
                            manilx @john.c
                            last edited by manilx

                            @john-c WILL DO! I've told purchasing to order the card you recommended. Let's see how long that'll take....

                            EDIT: ordered on Amazon. Expect to be here 1st week Nov.

                            Will report back then pinging you.

                            • J Offline
                              john.c @manilx
                              last edited by john.c

                              @manilx said in Epyc VM to VM networking slow:

                              […] EDIT: ordered on Amazon. Expect to be here 1st week Nov. Will report back then pinging you.

                              The Friday (1st November 2024) after this month's update to Xen Orchestra, or is it the week after?

                              • J Offline
                                john.c
                                last edited by

                                @manilx I've been waiting for your ping back with the report, following you saying the first week of November 2024; we're now at the beginning of the 2nd week of November 2024.

                                I'm wondering how it's going, please. Anything holding it up?

                                • M Offline
                                  manilx @john.c
                                  last edited by

                                  @john-c Hi. Ordered from Amazon that day, and after more than 2 weeks the order was cancelled by the supplier without notice. Reordered from another one and I'm still waiting...
                                  Not easy to get one.

                                  • J Offline
                                    john.c @manilx
                                    last edited by john.c

                                    @manilx said in Epyc VM to VM networking slow:

                                    @john-c Hi. Ordered from Amazon that day, and after more than 2 weeks the order was cancelled by the supplier without notice. Reordered from another one and I'm still waiting...
                                    Not easy to get one.

                                    Thanks for your reply. I hope it goes well this time. Anyway, if it still proves difficult, you can go for another quad-port 10GbE NIC that is compatible and can do the two-port LACP bond.

                                    If the selected quad-port 10GbE NIC is on general sale, you can get it through the supplier who provided your HPE Care Packs.

                                    • M Offline
                                      manilx @john.c
                                      last edited by manilx

                                      @john-c @olivierlambert Now we're talking!!!

                                      Here are the results of a 2-VM Delta/NBD backup (the initial one) using two 10G NICs in a bond:

                                      ScreenShot 2024-11-18 at 13.22.56.png

                                      WHAT a difference when we run XOA on an Intel host instead of an EPYC one for backups.

                                      I've said from the beginning that the slow backup speeds were due to the EPYC issue (as I got 200+ MB/s at home with measly Protectlis on 10G).

                                      Looking at what the Synology reports: I get up to 500+ MB/s during the backup!

                                      • ForzaF Online
                                        Forza
                                        last edited by Forza

                                        It's a good discovery that having XOA outside the pool can make backup performance much better.

                                        How is the root-cause investigation going? We too have quite poor network performance and would really like to see the end of this. Can we get a summary of the actions taken so far and the prognosis for a solution?

                                        Did anyone try plain Xen on a new 6.x kernel to see if the networking is the same there?

                                        • TeddyAstieT Offline
                                          TeddyAstie Vates 🪐 XCP-ng Team Xen Guru
                                          last edited by

                                           Is there a difference when running iperf3 with the -C bbr flag on the client side?
                                           In my testing with AMD EPYC and some other CPUs, results are more consistent overall with BBR, and better on the AMD EPYC side (but no miracle, it's still far from perfect).

                                           AMD EPYC 7262, VM to VM, 4 threads, iperf3:
                                           Without BBR: 4.5-6 Gbps (sometimes more; varies a lot)
                                           With BBR: 7-8 Gbps
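                                           For context, iperf3's -C (--congestion) flag selects the congestion-control algorithm per socket via the Linux-only TCP_CONGESTION socket option. A minimal Python sketch of that same mechanism (an illustration, not XCP-ng-specific; on non-Linux platforms, or if the algorithm's module is not loaded, it simply reports False):

                                           ```python
                                           import socket

                                           # TCP_CONGESTION is exposed by Python on Linux; 13 is its Linux value.
                                           TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

                                           def set_congestion(sock: socket.socket, algo: str) -> bool:
                                               """Try to select congestion algorithm `algo` (e.g. "bbr")."""
                                               try:
                                                   sock.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, algo.encode())
                                                   return True
                                               except OSError:
                                                   # e.g. module not loaded ("modprobe tcp_bbr") or unsupported OS
                                                   return False

                                           s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                                           ok = set_congestion(s, "cubic")  # cubic is the usual Linux default
                                           s.close()
                                           print("cubic selectable:", ok)
                                           ```
                                           
                                           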

                                          • ForzaF Online
                                            Forza @TeddyAstie
                                            last edited by Forza

                                             @TeddyAstie That is interesting. I had a look: the default seems to be cubic, but bbr is available after modprobe tcp_bbr. I also wonder if different queuing disciplines (tc qdisc) could help, for example mqprio, which spreads packets across the available NIC hardware queues?
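                                             For anyone wanting to try this, a sketch of the steps (run as root inside the guest; "eth0" is a placeholder interface name, and mq/mqprio need a multi-queue NIC):

                                             ```shell
                                             # Make the BBR module available for this boot:
                                             modprobe tcp_bbr
                                             sysctl net.ipv4.tcp_available_congestion_control   # should now list bbr

                                             # Select BBR as the default for new connections
                                             # (per-socket selection, e.g. iperf3 -C, still overrides this):
                                             sysctl -w net.ipv4.tcp_congestion_control=bbr

                                             # Inspect the current queuing discipline on an interface:
                                             tc qdisc show dev eth0
                                             ```
                                             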
