XCP-ng

    Epyc VM to VM networking slow

    • linuxmooseL Offline
      linuxmoose @planedrop
      last edited by

      @planedrop It is a mix of anything and everything one would find in an enterprise datacenter. Lots of application-server-to-database-server connections, and we are also running Rancher with Longhorn storage, which is particularly sensitive to latency, though mostly storage latency rather than network latency. We will just have to test and see if it is indeed an issue. If I understand correctly, the main issue is with performance between VMs on the same virtualization host. In that case, we can use placement rules to keep application and DB servers on separate hosts for better performance. Ironically, that is the opposite of the configuration we currently use with VMware.
      Anyway, we will just have to do some testing to see if it is an issue worth stressing over for us.
      Thanks.
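
      For illustration, a minimal sketch of that placement approach using the XenAPI Python bindings: it pins an app VM and a DB VM to different hosts by setting each VM's preferred home host (soft affinity). The pool master URL, credentials and VM names are placeholders, and VM.set_affinity is only a placement preference, not a hard anti-affinity rule.

      ```python
      # Sketch: keep an app VM and a DB VM on different hosts by setting each
      # VM's preferred home host (soft affinity) through XenAPI.
      # Pool master URL, credentials and VM names below are placeholders.
      import XenAPI

      session = XenAPI.Session("https://xcp-master.example.com")
      session.xenapi.login_with_password("root", "secret")
      try:
          hosts = session.xenapi.host.get_all()
          if len(hosts) < 2:
              raise RuntimeError("need at least two hosts to separate the VMs")

          def vm_by_name(label):
              refs = session.xenapi.VM.get_by_name_label(label)
              if not refs:
                  raise RuntimeError("VM not found: " + label)
              return refs[0]

          # XAPI will try to start each VM on its affinity host when possible.
          session.xenapi.VM.set_affinity(vm_by_name("app-server-01"), hosts[0])
          session.xenapi.VM.set_affinity(vm_by_name("db-server-01"), hosts[1])
      finally:
          session.xenapi.session.logout()
      ```

      Xen Orchestra's load balancer plugin can also express anti-affinity rules, which avoids scripting this by hand.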

      planedropP 1 Reply Last reply Reply Quote 1
      • linuxmooseL Offline
        linuxmoose @olivierlambert
        last edited by

        @olivierlambert said in Epyc VM to VM networking slow:

        I feel that this is going to be a much larger issue for us.

        Before that I would strongly encourage to test if it's really a problem, because it's really not in 90% of the use cases we've seen.

        Thanks @olivierlambert - that is definitely the plan. I still see XCP-ng as the best alternative we've considered so far.

        1 Reply Last reply Reply Quote 1
        • planedropP Offline
          planedrop Top contributor @linuxmoose
          last edited by

          @linuxmoose Yeah, testing it is definitely the way to go here; I don't think you'll see many issues, TBH.

          It's worth noting that the speeds being seen were still multi-gigabit, so again, it's not like things are dead slow.

          1 Reply Last reply Reply Quote 0
          • olivierlambertO Offline
            olivierlambert Vates 🪐 Co-Founder CEO
            last edited by

            In the meantime, we'll keep you posted here on our patches to test. So far, it's likely a patch in the guest (i.e., the Linux kernel) that might nicely improve performance. And if you need very high NIC performance, you can use SR-IOV and enjoy native NIC speed, even if it comes with some extra constraints.
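
            As a rough sketch only (not the official procedure), creating an SR-IOV network from the CLI on XCP-ng 8.x might look like the following, wrapped in Python here; the UUIDs are placeholders and the PIF must sit on a NIC whose driver supports SR-IOV:

            ```python
            # Sketch: create an SR-IOV network and give a VM a VIF on it, by
            # shelling out to the xe CLI on the host. UUIDs are placeholders.
            import subprocess

            def xe(*args):
                """Run an xe command and return its trimmed stdout."""
                return subprocess.run(["xe", *args], check=True,
                                      capture_output=True, text=True).stdout.strip()

            # 1. A network object that the SR-IOV virtual functions are exposed through.
            net_uuid = xe("network-create", "name-label=sriov-net")

            # 2. Enable SR-IOV on the physical PIF and tie it to that network.
            pif_uuid = "11111111-2222-3333-4444-555555555555"   # placeholder
            xe("network-sriov-create", "network-uuid=" + net_uuid, "pif-uuid=" + pif_uuid)

            # 3. A VIF on that network gets a virtual function instead of a vSwitch port.
            vm_uuid = "66666666-7777-8888-9999-000000000000"    # placeholder
            xe("vif-create", "vm-uuid=" + vm_uuid, "network-uuid=" + net_uuid, "device=1")
            ```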

            planedropP 1 Reply Last reply Reply Quote 1
            • planedropP Offline
              planedrop Top contributor @olivierlambert
              last edited by

              @olivierlambert That's a good point about SR-IOV; it would be a good workaround if super-fast NIC speeds are needed in a specific guest.

              ForzaF 1 Reply Last reply Reply Quote 1
              • ForzaF Offline
                Forza @planedrop
                last edited by

                Would sr-iov with xoa help backup speeds?

                J planedropP 2 Replies Last reply Reply Quote 0
                • J Offline
                  JamesG @Forza
                  last edited by

                  @Forza said in Epyc VM to VM networking slow:

                  Would sr-iov with xoa help backup speeds?

                  If you specify the SR-IOV NIC, it will be wire-speed.

                  1 Reply Last reply Reply Quote 0
                  • planedropP Offline
                    planedrop Top contributor @Forza
                    last edited by

                    @Forza The XOA backup performance is more related to processing than to the network, at least from what I understand and have tested.

                    So I don't think you'll see much of a change there.

                    1 Reply Last reply Reply Quote 0
                    • olivierlambertO Offline
                      olivierlambert Vates 🪐 Co-Founder CEO
                      last edited by

                      On Intel, the biggest bottleneck at the moment is the export speed capability of the Dom0. On AMD, the backup speed is also affected by the lack of an equivalent to Intel's iPAT, but it might also depend on other factors (backup repo speed, etc.).

                      planedropP 1 Reply Last reply Reply Quote 1
                      • planedropP Offline
                        planedrop Top contributor @olivierlambert
                        last edited by

                        @olivierlambert Yeah, so far backups have been fast enough not to pose any huge issue though.

                        IMO if you have a huge VM (many TB) it should just be dealt with on a NAS or something instead of a VHD.

                        Still glad that qcow2 is coming though!

                        1 Reply Last reply Reply Quote 0
                        • K Offline
                          ksyblast
                          last edited by

                          Hello everyone!

                          It looks like we are also affected by this issue. A week of investigation led us to this thread, and our test results are very close to what is described here.

                          We have VM routers based on OEL8 (tested with all available kernels), XCP-ng 8.2, an AMD EPYC 7443P 24-core processor, and BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller NICs.

                          Network performance degraded significantly after moving the VMs to the EPYC hosts. Tests were performed with iperf:

                          1. iperf hosts outside of the hypervisor, VM-router on the hypervisor:
                            UDP and TCP: ~1.5Gbps
                          2. both iperf hosts and a router on the hypervisor:
                            UDP ~1.5Gbps, TCP ~7Gbps
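
                          For anyone who wants to repeat this comparison, here is a small sketch of how the TCP vs UDP runs could be scripted (assuming iperf3, with `iperf3 -s` already listening on the target; the address is a placeholder):

                          ```python
                          # Sketch: compare TCP vs UDP throughput to a target with iperf3.
                          # Assumes `iperf3 -s` runs on the target; the IP is a placeholder.
                          import json
                          import subprocess

                          TARGET = "10.0.0.42"  # placeholder address of the iperf3 server

                          def run_iperf(extra_args):
                              out = subprocess.run(
                                  ["iperf3", "-c", TARGET, "-t", "10", "-J", *extra_args],
                                  check=True, capture_output=True, text=True).stdout
                              end = json.loads(out)["end"]
                              summary = end.get("sum_received") or end["sum"]  # TCP vs UDP keys
                              return summary["bits_per_second"] / 1e9

                          print("TCP: %.2f Gbit/s" % run_iperf([]))
                          # -b 0 lifts the default 1 Mbit/s UDP rate cap so UDP is actually stressed.
                          print("UDP: %.2f Gbit/s" % run_iperf(["-u", "-b", "0"]))
                          ```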

                          We also tried SR-IOV with one 10Gb NIC and got ~9Gbps with TCP and around 4Gbps with UDP.

                          SR-IOV, however, does not seem usable for us, since it looks like we cannot use it over LACP, which we require for redundancy for the other VMs. Alternatively, we would need additional NICs dedicated to SR-IOV on our routers, or some other connection option within our datacenter.

                          1 Reply Last reply Reply Quote 0
                          • olivierlambertO Offline
                            olivierlambert Vates 🪐 Co-Founder CEO
                            last edited by

                            Hello everyone!

                            Great news! You now have something to test: https://xcp-ng.org/forum/topic/10943/network-traffic-performance-on-amd-processors

                            Please go there, follow the instructions carefully, and report back!

                            planedropP 1 Reply Last reply Reply Quote 2
                            • planedropP Offline
                              planedrop Top contributor @olivierlambert
                              last edited by

                              @olivierlambert This is great, thanks for letting us know! I'll give this a shot in my lab as soon as I can.

                              1 Reply Last reply Reply Quote 1