Epyc VM to VM networking slow
-
@olivierlambert
I wasn't aware. Thanks! Downloading it to do a test right away.

Test done:
| Sender | Receiver | Run1 | Run2 | Run3 |
|---|---|---|---|---|
| Debian 10, kernel 4.19 | Debian 10, kernel 4.19 | 4.81 Gb | 4.81 Gb | 4.83 Gb |
| Debian 10, kernel 5.10 | Debian 10, kernel 4.19 | 5.13 Gb | 5.02 Gb | 5.12 Gb |
| Debian 10, kernel 5.10 | Debian 10, kernel 5.10 | 4.98 Gb | 5.02 Gb | 4.97 Gb |
The sender runs `iperf -c <IP-to-receiver> -t 60`.
Kernel 4.19 = 4.19.0-6-amd64
Kernel 5.10 = 5.10.0-0.deb10.24-amd64
CPU: 4 cores (AMD EPYC 7302P)
RAM: 4 GB
Created from XOA Hub
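For reference, a minimal sketch of how such a run can be reproduced; the receiver address 10.0.0.2 is a placeholder, and this assumes the classic iperf2 client/server used above:

```sh
# On the receiving VM: start the iperf server (listens on TCP port 5001 by default)
iperf -s

# On the sending VM: run a 60-second throughput test against the receiver's IP
iperf -c 10.0.0.2 -t 60
```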
-
Thanks @probain, now can you try

`iperf -s`

in the Dom0 and

`iperf -c <IP dom0>`

in the Debian guest?
-
@olivierlambert
vm -> dom0 results in "no route to host": firewall?

Results will be shown for dom0 -> vm, listed by the kernel installed on the VM.
Just as earlier: the VM is installed via XOA Hub, with 4 CPUs and 4 GB RAM. The host runs an AMD EPYC 7302P.
| VM kernel ver. | Run1 | Run2 | Run3 |
|---|---|---|---|
| kernel 4.19.0 | 8.47 Gb | 8.82 Gb | 8.43 Gb |
| kernel 5.10.0 | 7.12 Gb | 7.07 Gb | 7.11 Gb |
-
Yes, disable the firewall first (only in a testing lab, obviously) with

`iptables -F`
-
@olivierlambert how do I restore the iptables rules afterwards? Other than a reboot, of course.
Update: Tests done
| vm -> dom0 | Run1 | Run2 | Run3 |
|---|---|---|---|
| kernel 4.19.0 | 5.84 Gb | 5.77 Gb | 5.85 Gb |
| kernel 5.10.0 | 1.25 Gb | 1.26 Gb | 1.28 Gb |
Specs are the same as in the previous post.
-
Thanks, so at least it confirms something we are also seeing on our side. We found the exact commit.
-
Here are the Opteron results with the firewall dropped:
| source | destination | OS | Kernel | Speed average |
|---|---|---|---|---|
| vm | dom | debian 10 | 4.19.0-6-amd64 | 6.57 Gbits/sec |
| dom | vm | debian 10 | 4.19.0-6-amd64 | 1.79 Gbits/sec |
| vm | dom | truenas | 6.6.20 | 2.01 Gbits/sec |
| dom | vm | truenas | 6.6.20 | 1.82 Gbits/sec |
| host | vm | debian 10 | 4.19.0-6-amd64 | 5.32 Gbits/sec |
| host | vm | truenas | 6.6.20 | 1.92 Gbits/sec |
| host | dom | debian | 4.19.0+1 | 8.97 Gbits/sec |
-
@probain said in Epyc VM to VM networking slow:
I restore the iptables again afterwards? Other than reboot
This worked for me:
| action | command |
|---|---|
| save | `iptables-save > firewall.conf` |
| flush | `iptables -F` |
| restore | `cat firewall.conf \| iptables-restore` |
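The same thing as a copy/paste sequence, as a sketch assuming the default iptables tooling on Debian and using firewall.conf purely as a scratch file name:

```sh
# Save the current rules to a scratch file before testing
iptables-save > firewall.conf

# Flush all rules for the duration of the test (lab use only)
iptables -F

# Restore the saved rules afterwards
iptables-restore < firewall.conf
```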
-
Here's a little test I just ran between VMs over SMB on my Threadripper 7960X build on a Supermicro H13SRA-TF motherboard. Definitely not too bad, and these VMs are on different SRs.
-
@sluflyer06 This test does not say anything other than that you have a 10G NIC, and we already knew that the limit on the latest-gen AMD CPUs is just above 10G. If you install a 25G NIC, you can likely only use half of that capacity, and for those of us running this in actual datacenters that is a pretty critical issue. Even more so because the limit appears to be shared per host: with a 12 Gbit limit, 4 VMs running on the same host get about 3 Gbit each. And when you realize lots of us may have 20-40 VMs per server that all use a decent portion of the network, it is suddenly really scary: that works out to 300-600 Mbit per VM.
Or even worse: for those on earlier generations of the AMD platform, where the limit is roughly 2-4 Gbit, you are now looking at 100-200 Mbit per VM, which is a ceiling even a smaller provider can hit during peak use times.
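To make the back-of-the-envelope math explicit, here is a quick sketch; the 12 Gbit/s per-host ceiling is just the illustrative figure used above, not a measured value:

```sh
#!/bin/sh
# Rough per-VM share when a shared per-host throughput ceiling is divided evenly
host_limit_mbit=12000   # illustrative ~12 Gbit/s per-host ceiling
for vms in 4 20 40; do
    echo "${vms} VMs sharing ${host_limit_mbit} Mbit/s -> $((host_limit_mbit / vms)) Mbit/s each"
done
```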
It is great that the issue is not triggered for you as your bottleneck is elsewhere, but it is a very serious issue for several of us.
With that said, Vates is handling it as well as anyone could ask, and I thank them for the attention given and the dedication to solving it.
It is a NASTY bug, and it took a very specific situation for it to be discovered at all.
-
@Seneram Ah well, excuse my ignorance then; I thought people said the limits were much lower. I can see what you are saying and why that is a big issue.
-
@olivierlambert is it already known in which update/release this problem will be solved?
-
@LennertvdBerg they are still trying to figure this one out.
And from what I know, an estimated full fix is not in sight just yet. At least I haven't been told of one in my ticket with them. But I do know they are still working very hard on this.
-
That's correct, it's a long investigation that is very likely related to the AMD microarchitecture itself. It's not a trivial thing to fix. We've seen various improvements here and there, but nothing big so far. We're still working on it, and as Vates grows, we can put more resources toward the issue.
-
Just out of curiosity, how is everyone that's experiencing this issue currently dealing with it while the issue is being investigated? I was sort of naively hoping that it would get sorted by the 8.3 release, but now that those hopes have been dashed I'm trying to see what options I have to work around the issue.
-
We spread network-heavy VMs across the cluster, since it is a per-physical-host limit. We also changed our design a bit: where we had intended to have all levels of routers virtual, we split out the core routers and kept them physical.
-
@timewasted Thankfully our VMs are fine with a 1Gb connection.
The exception is XOA itself during backups: we're getting a max of 80-90 MB/s. This is all on 10Gb connections.
Whereas at my homelab, with a measly Protectli VP6670 (2.5Gb management connection), I can fully saturate the network port at 200-300 MB/s... I'm sure it's because of this EPYC issue that we don't get more speed at the production site.
-
@manilx is your NAS virtualized on the host, or a separate physical box?
-
@sluflyer06 Slow business site: our backup NASes, a Synology DS3622XS and a QNAP h1288X, are both connected via 10G to a 10G switch; both HP EPYC hosts are also connected to the same switch via 10G.
Fast homelab: the backup NAS is the same QNAP, also connected via 10G, and the two Protectli VP6670 hosts are connected on their management interface via 2.5G.
-
@manilx I don't think it is directly related, given just how low it is. But we also see similar "lower than expected" speeds on backups.