XCP-ng

    Epyc VM to VM networking slow

    • JamesG @JamesG

      A note...

      I'm running a single 16-core, 32-thread second-gen Epyc.
      Nicols is running a dual-proc, 24-core, 48-thread third-gen Epyc.

      My base clock rate is 3.0 GHz; his is 2.9 GHz.

      With its improved caching and memory handling, the third-gen Epyc should behave better than my second-gen CPU, but generally speaking, our performance seems to be the same.

      • olivierlambert (Vates πŸͺ Co-Founder & CEO)

        He said he could reach 18 Gbit/s between two Windows VMs, if I remember correctly.

        I wonder about the guest kernel too (Debian 11 vs 12).

        • JamesG @olivierlambert

          @olivierlambert With a billion threads.

          Anyway...

          I'm most definitely a willing subject to help get this resolved. Heck... I'll even give you guys access to the environment to do whatever you want to do. I would just like to see this get fixed.

          With that... you guys tell me: what tests do you want run, and do you want access to the environment to do your own thing with it?

          • olivierlambert (Vates πŸͺ Co-Founder & CEO)

            Can you reproduce the same speed he got on your side? First, we need a consistent result between multiple people to be sure it's not platform related (i.e. Supermicro or something).

            The only reason we could think it's not normal is the difference between Intel and AMD, which shouldn't be that huge. Or maybe AMD CPUs are a lot slower with memcpy()? πŸ€”
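
            If anyone wants to sanity-check the memcpy() idea inside a guest, a rough large-buffer copy test is enough to compare raw copy throughput between the Intel and AMD hosts. A minimal sketch in plain Python (the buffer size and iteration count are arbitrary choices, nothing we've settled on):

            ```python
            import time

            SIZE = 256 * 1024 * 1024   # 256 MiB, large enough to spill out of the caches
            ITERATIONS = 20

            src = bytearray(SIZE)
            dst = bytearray(SIZE)

            start = time.perf_counter()
            for _ in range(ITERATIONS):
                dst[:] = src           # slice assignment is one big memcpy/memmove in CPython
            elapsed = time.perf_counter() - start

            copied_gib = SIZE * ITERATIONS / 2**30
            print(f"copied {copied_gib:.0f} GiB in {elapsed:.2f} s -> {copied_gib / elapsed:.2f} GiB/s")
            ```

            Running the same script in a similar VM on the E3 box and on an Epyc box would at least show whether in-guest copy bandwidth tracks the iperf gap.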

            • nicols @JamesG

              @JamesG said in Epyc VM to VM networking slow:

              @olivierlambert With a billion threads.

              Nope, a Win 10 VM with 4 or 8 vCPUs and 8 GB RAM.
              But with a billion threads in a Linux VM, speed increases up to 8 threads, then it drops again.

              • JamesG @olivierlambert

                @olivierlambert For single-threaded iperf... yes, our speeds match 100%. Which is half the transfer rate of a single-threaded iperf on 12-year-old Xeon E3 hardware.

                I understand that we've had lots of security issues in the past decade and several steps have been taken to protect and isolate memory inside all virtualization platforms. When I first built my E3-1230 Xeon system for my homelab, VM-to-VM iperfs were like 20 Gb/s. Nowadays that's significantly slower.

                Anyway... I just find it hard to believe that, on as superior a computing platform as Epyc, single-threaded iperf is so much slower than on 12-year-old entry-level Intel CPUs.

                Maybe I should load VMware on this system, see how it does, and report back. Same hardware but a different hypervisor, and compare notes.

                • nicols @nicols

                  @nicols said in Epyc VM to VM networking slow:

                  But with a billion threads in a Linux VM, speed increases up to 8 threads, then it drops again.

                  This is with 1 and 16 threads:

                  https://nextcloud.openit.hr/s/BYGK2yjQziEMKww

                  • nicols @nicols

                    @nicols said in Epyc VM to VM networking slow:

                    This is with 1 and 16 threads:
                    https://nextcloud.openit.hr/s/BYGK2yjQziEMKww

                    Also, this:
                    https://nextcloud.openit.hr/s/CptZpTt4jbWcRPX
                    is the CPU load on the host while two Linux VMs run a 16-thread iperf (with a cumulative speed of a pathetic 4 Gbit/sec). It seems way too high for this kind of job?
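
                    For a number that is easier to compare than a screenshot, a minimal sketch like the one below logs overall CPU utilisation once a second from /proc/stat. It reports the view of wherever it runs (dom0 or a guest); the per-domain picture on the host side would come from xentop instead. The 30-sample count is arbitrary.

                    ```python
                    import time

                    def cpu_times():
                        # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq steal ..."
                        with open("/proc/stat") as f:
                            values = [int(v) for v in f.readline().split()[1:]]
                        idle = values[3] + values[4]   # idle + iowait
                        return sum(values), idle

                    prev_total, prev_idle = cpu_times()
                    for _ in range(30):                # ~30 one-second samples; adjust to cover the iperf run
                        time.sleep(1)
                        total, idle = cpu_times()
                        d_total, d_idle = total - prev_total, idle - prev_idle
                        busy = 100.0 * (d_total - d_idle) / d_total if d_total else 0.0
                        print(f"CPU busy: {busy:5.1f} %")
                        prev_total, prev_idle = total, idle
                    ```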

                    • Danp (Pro Support Team) @olivierlambert

                      @olivierlambert said in Epyc VM to VM networking slow:

                      Or maybe AMD CPU are a lot slower with memcpy()? πŸ€”

                      Has anyone reviewed this issue? Is there a way to test with a newer version of glibc?
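
                      As a starting point, confirming which glibc each test VM (and dom0) actually ships is easy from the stock Python; matching the reported version against the upstream memcpy change would still be a manual step. A minimal sketch:

                      ```python
                      import os
                      import platform

                      # glibc version as seen by this Python interpreter
                      print("platform.libc_ver():", platform.libc_ver())

                      # Linux/glibc only: ask the C library directly
                      print("CS_GNU_LIBC_VERSION:", os.confstr("CS_GNU_LIBC_VERSION"))
                      ```

                      Actually testing a newer glibc would presumably mean using a guest that ships one (e.g. Debian 12 rather than 11, as mentioned earlier) rather than swapping libraries in place.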

                      • JamesG @nicols

                        @nicols give me your VM specs and I'll run the exact same tests. vCPU, RAM, anything else relevant.

                        • JamesG @Danp

                          @Danp That's interesting...

                          • nicols @JamesG

                            @JamesG said in Epyc VM to VM networking slow:

                            @nicols give me your VM specs and I'll run the exact same tests. vCPU, RAM, anything else relevant.

                              Debian 12: 16 vCPUs, 2 GB RAM
                              Windows 10 Pro: 16 vCPUs, 8 GB RAM, Citrix VM Tools 9.3.1

                              On Debian Linux there is not much difference between 8 and 16 vCPUs.
                              On Windows 10: 8 vCPUs: 16 Gbit/sec, 16 vCPUs: 21 Gbit/sec.
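
                              To keep the runs comparable, something like the sketch below could drive the tests identically on both sides. It assumes iperf3 is installed in both VMs, that the receiving VM is already running iperf3 -s, and 10.0.0.2 is just a placeholder address; it sweeps the parallel-stream count and prints the summed receive rate from iperf3's JSON report.

                              ```python
                              import json
                              import subprocess

                              TARGET = "10.0.0.2"        # placeholder: IP of the VM already running "iperf3 -s"
                              DURATION = 10              # seconds per run
                              STREAM_COUNTS = [1, 2, 4, 8, 16]

                              for streams in STREAM_COUNTS:
                                  result = subprocess.run(
                                      ["iperf3", "-c", TARGET, "-P", str(streams), "-t", str(DURATION), "-J"],
                                      capture_output=True, text=True, check=True,
                                  )
                                  report = json.loads(result.stdout)
                                  gbits = report["end"]["sum_received"]["bits_per_second"] / 1e9
                                  print(f"{streams:2d} stream(s): {gbits:.2f} Gbit/s")
                              ```

                              Run unchanged on both Epyc setups and on the old E3 box, it would give directly comparable numbers.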

                            • nicols @JamesG

                              @JamesG said in Epyc VM to VM networking slow:

                              @Danp That's interesting...

                              Yes, it is, but as I wrote earlier, I get the full 21 Gbps Linux VM to VM on Proxmox/KVM (on the exact same host, with the same BIOS settings), so I think it must be some problem between Epyc and the Xen hypervisor...

                              • JamesG @nicols

                                @nicols Agreed. I'm pretty sure this is a Xen/Epyc issue.

                                This evening I'll build a couple of VMs to your config, run iperf, and report back the results.

                                • Danp (Pro Support Team) @nicols

                                  @nicols said in Epyc VM to VM networking slow:

                                  i get full 21 Gbps Linux VM to VM on Proxmox/KVM

                                  If glibc is the source of the issue, then a likely explanation for your results is that Proxmox/KVM are using an updated version of this library where the patch has been applied.

                                  @olivierlambert Do you know if anyone on your team has looked into this?

                                  • olivierlambert (Vates πŸͺ Co-Founder & CEO)

                                    We are very very busy ATM.

                                    Also, comparing to KVM doesn't make sense at all: there's no such network/disk isolation in KVM, so you can do zero-copy, which yields much better performance (at the price of thinner isolation).

                                    First, we should compare two fully patched systems (one Intel, one AMD) with a similar config; then we'd have a baseline and could understand why AMD is a lot slower.

                                    • olivierlambert (Vates πŸͺ Co-Founder & CEO)

                                      Adding @dthenot in the loop in case it rings a bell.

                                      • JamesG @JamesG

                                        The past couple of days have been pretty nuts, but I've dabbled with testing this, and in my configuration with XCP-ng 8.3 and all currently released patches, I top out at 15 Gb/s with 8 threads on Win 10. Going to 16 threads or beyond doesn't really improve things.

                                        Killing core boost and SMT and setting deterministic performance in the BIOS added nearly 2 Gb/s to single-threaded iperf.

                                        When running iperf and watching htop on the XCP-ng server, I see nearly all cores running at 15-20% for the duration of the transfer. That seems excessive.

                                        iperf on the E3-1230v2... single thread, 9.27 Gb/s, with negligible improvement from more threads. Surprisingly, a similar hit on CPU usage, though not as bad: 10 Gbps of traffic hits about 10% or so. Definitely not as bad as on the Epyc system.

                                        I'll do more thorough testing tomorrow.

                                        • Forza @JamesG

                                          I've found that iperf isn't super great at scaling its performance, which might be a small factor here.

                                          I too have similar performance figures VM<->VM on an AMD EPYC 7402P 24-core server: about 6-8 Gbit/s.

                                          • nicols

                                            Today I got my hands on an HPE ProLiant DL325 Gen10 server with an Epyc 7502 32-core (64-thread) CPU. I installed XCP-ng 8.2.1 and applied all patches with yum update, then installed 2 Debian and 2 Windows 10 VMs. The results are very similar:

                                            Linux to Linux VM on a single host: 4 Gbit/sec single-threaded, max 6 Gbit/sec with multiple threads.
                                            I tried various numbers of vCPUs (2, 4, 8, 12, 16) and various combinations of iperf threads.

                                            Windows to Windows VM: 3.5 Gbit/sec single-threaded, and 18 Gbit/sec with multiple threads.

                                            All this was with default BIOS settings, just changed to legacy boot.
                                            With performance tuning in the BIOS (C-states and other settings), I believe I can get 10-15% more; I will try that tomorrow.

                                            So I think this confirms that this is not a Supermicro-related problem, but something in the relation between Xen (the hypervisor?) and the AMD CPU.
