XCP-ng

    Epyc VM to VM networking slow

probain @olivierlambert

@olivierlambert
I wasn't aware. Thanks! Downloading it to run a test right away.

Test done:

    Sender                  Receiver                Run1    Run2    Run3
    Debian 10, kernel 4.19  Debian 10, kernel 4.19  4.81Gb  4.81Gb  4.83Gb
    Debian 10, kernel 5.10  Debian 10, kernel 4.19  5.13Gb  5.02Gb  5.12Gb
    Debian 10, kernel 5.10  Debian 10, kernel 5.10  4.98Gb  5.02Gb  4.97Gb

The sender runs iperf -c <IP-to-receiver> -t 60.
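For reference, the full test presumably looks like this end to end (a minimal sketch; only the sender command was stated above, the receiver side being standard iperf server mode):

    # receiver VM: start an iperf server (assumed; only the sender
    # command is given above)
    iperf -s

    # sender VM: 60-second TCP test against the receiver
    iperf -c <IP-to-receiver> -t 60

    # check which kernel a given VM is running
    uname -r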

      Kernel 4.19 = 4.19.0-6-amd64
      Kernel 5.10 = 5.10.0-0.deb10.24-amd64

      CPU 4 cores (AMD EPYC 7302P)
      RAM 4GB

      Created from XOA-hub

olivierlambert (Vates 🪐 Co-Founder & CEO)

Thanks @probain, now can you try iperf -s in the Dom0 and iperf -c <IP dom0> in the Debian guest?

probain @olivierlambert

@olivierlambert
VM -> dom0 results in "no route to host"; firewall?

Results below are for dom0 -> VM, listed by the kernel installed on the VM.

Same setup as earlier: VM installed via the XOA Hub, with 4 CPUs and 4GB RAM. The host CPU is an AMD EPYC 7302P.

    VM kernel ver.  Run1    Run2    Run3
    kernel 4.19.0   8.47Gb  8.82Gb  8.43Gb
    kernel 5.10.0   7.12Gb  7.07Gb  7.11Gb
          
olivierlambert (Vates 🪐 Co-Founder & CEO)

Yes, disable the firewall first (only in a testing lab, obviously) with iptables -F.

probain @olivierlambert

@olivierlambert How do I restore the iptables rules afterwards? Other than a reboot, of course 😋

Update: tests done.

vm -> dom0

    VM kernel      Run1    Run2    Run3
    kernel 4.19.0  5.84Gb  5.77Gb  5.85Gb
    kernel 5.10.0  1.25Gb  1.26Gb  1.28Gb

Specs are the same as in the previous post.

olivierlambert (Vates 🪐 Co-Founder & CEO)

Thanks, so at least it confirms something we are also seeing here. We found the exact commit.

G-Ork

Here are the Opterons with the firewall dropped:

    Source  Destination  OS         Kernel           Speed average
    vm      dom          debian 10  4.19.0-6-amd64   6.57 Gbits/sec
    dom     vm           debian 10  4.19.0-6-amd64   1.79 Gbits/sec
    vm      dom          truenas    6.6.20           2.01 Gbits/sec
    dom     vm           truenas    6.6.20           1.82 Gbits/sec
    host    vm           debian 10  4.19.0-6-amd64   5.32 Gbits/sec
    host    vm           truenas    6.6.20           1.92 Gbits/sec
    host    dom          debian     4.19.0+1         8.97 Gbits/sec
G-Ork @probain

@probain said in Epyc VM to VM networking slow:

"how do I restore the iptables again afterwards? Other than reboot"

This worked for me:

    Action   Command
    save     iptables-save > firewall.conf
    flush    iptables -F
    restore  cat firewall.conf | iptables-restore
sluflyer06

Here's a little test I just ran between VMs over SMB on my Threadripper 7960X build on a Supermicro H13SRA-TF motherboard. Definitely not too bad; these VMs are on different SRs.
[screenshot: SMB transfer speed between the VMs]

Seneram @sluflyer06

@sluflyer06 This test doesn't say anything other than that you have a 10G NIC, and we already knew that the limit on latest-gen AMD CPUs is just above 10G. If you insert a 25G NIC, you can likely only use half of that capacity, and for those of us running this in actual datacenters, that is a pretty critical issue. Even more so since the limit seems to be shared per host: with 4 VMs running on the same host and a 12Gbit limit, you get 3Gbit per VM. And when you realize lots of us may have 20-40 VMs per server that all use a decent portion of the network, it gets really scary: that is 300-600Mbit per VM.

Or even worse: for those on earlier generations of the AMD platform, where the limit is 2-4Gbit-ish, you're looking at 100-200Mbit per VM, a level of load that even a smaller provider can easily reach during peak use times.
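For what it's worth, a quick sketch of that per-VM arithmetic (the ~12Gbit/s per-host ceiling is the figure assumed in this post, not a measured constant):

    # even split of an assumed 12Gbit/s per-host ceiling
    for vms in 4 20 40; do
        echo "$vms VMs -> $((12000 / vms)) Mbit/s each"
    done
    # prints: 4 VMs -> 3000 Mbit/s each
    #         20 VMs -> 600 Mbit/s each
    #         40 VMs -> 300 Mbit/s each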

                        It is great that the issue is not triggered for you as your bottleneck is elsewhere, but it is a very serious issue for several of us.

With that said, Vates is handling it as well as anyone could ask, and I thank them for the attention given and the dedication to solving it.

It is a NASTY bug, and it took a very specific situation for it to be discovered at all.

sluflyer06 @Seneram

@Seneram Ah well, excuse my ignorance then; I thought people said the limits were much lower. I can see what you're saying, and the big issue with that.

LennertvdBerg @olivierlambert

                            @olivierlambert is it already known in which update/release this problem will be solved?

Seneram @LennertvdBerg

@LennertvdBerg They are still trying to figure this one out.

An estimated full fix is not in sight just yet, from what I know; at least I haven't been informed about one in my ticket with them. But I do know they are still working very hard on this.

olivierlambert (Vates 🪐 Co-Founder & CEO)

That's correct; it's a long investigation, very likely related to the AMD microarchitecture itself. It's not a trivial thing to fix. We've seen various improvements here and there, but nothing big so far. We're still working on it, and also, as Vates grows, we can put more resources on the issue.

timewasted

                                  Just out of curiosity, how is everyone that's experiencing this issue currently dealing with it while the issue is being investigated? I was sort of naively hoping that it would get sorted by the 8.3 release, but now that those hopes have been dashed I'm trying to see what options I have to work around the issue.

Seneram @timewasted

@timewasted We spread network-heavy VMs across the cluster, since it is a per-physical-host limit. We also changed our design a bit: where we had intended to make all levels of routers virtual, we split out the core routers, and they are physical.
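For anyone wanting to do the same, moving a network-heavy VM to a less loaded host in the pool can be done live with the xe CLI; a minimal sketch, where <vm-name> and <target-host> are placeholders:

    # look up the VM and the target host
    xe vm-list name-label=<vm-name> params=uuid
    xe host-list name-label=<target-host> params=uuid

    # live-migrate the VM to the other host in the pool
    xe vm-migrate vm=<vm-name> host=<target-host> live=true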

manilx @timewasted

@timewasted Thankfully our VMs are fine with a 1Gb connection. The exception is XOA itself during backups: we get a max of 80-90MB/s, and this is all on 10Gb connections.
At my homelab, with a measly Protectli VP6670 (2.5Gb management connection), I can fully saturate the network port at 200-300MB/s... I'm sure it's because of this EPYC issue that we don't get more speed at the production site.
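For context, converting line rate to a rough MB/s ceiling (ignoring protocol overhead) shows why 80-90MB/s on a 10Gb link is so far off, while 200-300MB/s really does saturate a 2.5Gb link:

    # line rate in Mbit/s divided by 8 gives a rough MB/s ceiling
    echo "10Gb/s  -> $((10000 / 8)) MB/s"   # 1250 MB/s
    echo "2.5Gb/s -> $((2500 / 8)) MB/s"    # 312 MB/s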

sluflyer06 @manilx

@manilx Is your NAS virtualized on the host, or a separate physical box?

manilx @sluflyer06

@sluflyer06 Slow business site: our backup NASes, a Synology DS3622XS and a QNAP h1288X, are both connected via 10G to a 10G switch; both HP EPYC hosts are also connected to the same switch via 10G.

Fast homelab: the backup NAS is the same QNAP via 10G, and the two Protectli VP6670 hosts are connected on the management interface via 2.5G.

Seneram @manilx

@manilx I don't think it is directly related, given just how low it is. But we also see similar lower-than-expected speeds on backups.
