Epyc VM to VM networking slow
-
@olivierlambert
vm -> dom0 results in "no route to host": firewall? Results below are for dom0 -> vm, listed per kernel installed on the VM.
Just as before: the VM is installed via XOA Hub, with 4 CPUs and 4GB RAM. The host CPU is an AMD EPYC 7302P.
| VM kernel ver. | Run1 | Run2 | Run3 |
| --- | --- | --- | --- |
| kernel 4.19.0 | 8.47Gb | 8.82Gb | 8.43Gb |
| kernel 5.10.0 | 7.12Gb | 7.07Gb | 7.11Gb |
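Averaging the three runs makes the kernel-to-kernel gap easier to compare; a small shell helper for that (`avg3` is my own name, not something from the thread):

```shell
# avg3: average three throughput figures (Gbit/s); hypothetical helper,
# just for summarizing the runs above. awk handles the decimal math.
avg3() {
  awk -v a="$1" -v b="$2" -v c="$3" 'BEGIN { printf "%.2f\n", (a+b+c)/3 }'
}

avg3 8.47 8.82 8.43   # kernel 4.19.0 -> 8.57
avg3 7.12 7.07 7.11   # kernel 5.10.0 -> 7.10
```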
-
yes, disable the firewall first (only in a testing lab, obviously) with:
iptables -F
-
@olivierlambert how do I restore the iptables again afterwards? Other than reboot ofc
Update: Tests done
| vm -> dom0 | Run1 | Run2 | Run3 |
| --- | --- | --- | --- |
| kernel 4.19.0 | 5.84Gb | 5.77Gb | 5.85Gb |
| kernel 5.10.0 | 1.25Gb | 1.26Gb | 1.28Gb |
Specs are the same as in the previous post.
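For anyone wanting to reproduce runs like these: numbers of this kind are typically collected with iperf3. A minimal sketch, assuming iperf3 is installed on both ends (`run_bench` and the placeholder IP are mine, not from the thread):

```shell
# Sketch of a vm -> dom0 benchmark loop. Start `iperf3 -s` on dom0 first.
# run_bench and the 10.0.0.1 placeholder address are hypothetical.
run_bench() {
  dom0_ip=$1
  for run in 1 2 3; do
    printf 'Run %s: ' "$run"
    # Keep only the receiver-side bitrate from the summary line;
    # field positions can vary slightly between iperf3 versions.
    iperf3 -c "$dom0_ip" -t 10 -f g | awk '/receiver/ { print $7, $8 }'
  done
}

# Usage (placeholder address): run_bench 10.0.0.1
```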
-
Thanks, so at least it confirms something we are also seeing on our side. We found the exact commit.
-
Here are the Opteron results with the firewall dropped:
| source | destination | OS | Kernel | Speed Average |
| --- | --- | --- | --- | --- |
| vm | dom | debian 10 | 4.19.0-6-amd64 | 6.57 Gbits/sec |
| dom | vm | debian 10 | 4.19.0-6-amd64 | 1.79 Gbits/sec |
| vm | dom | truenas | 6.6.20 | 2.01 Gbits/sec |
| dom | vm | truenas | 6.6.20 | 1.82 Gbits/sec |
| host | vm | debian 10 | 4.19.0-6-amd64 | 5.32 Gbits/sec |
| host | vm | truenas | 6.6.20 | 1.92 Gbits/sec |
| host | dom | debian | 4.19.0+1 | 8.97 Gbits/sec |
-
@probain said in Epyc VM to VM networking slow:
I restore the iptables again afterwards? Other than reboot
this worked for me
| action | command |
| --- | --- |
| save | `iptables-save > firewall.conf` |
| flush | `iptables -F` |
| restore | `cat firewall.conf \| iptables-restore` |
-
Here's a little test I just ran between VMs over SMB on my Threadripper 7960X build on a Supermicro H13SRA-TF motherboard. Definitely not too bad; these VMs are on different SRs.
-
@sluflyer06 This test doesn't show anything beyond the fact that you have a 10G NIC, and we already knew that the limit on the latest-gen AMD platforms sits just above 10G. If you put in a 25G NIC, you would likely only be able to use about half of that capacity, and for those of us running this in actual datacenters that is a pretty critical issue. Even more so because the limit appears to be shared per host: with a 12 Gbit cap, 4 VMs on the same host get roughly 3 Gbit each. And when you realize many of us run 20-40 VMs per server, all using a decent share of the network, it gets really scary: that works out to 300-600 Mbit per VM.
It's even worse for those on earlier-gen AMD platforms, where the limit is roughly 2-4 Gbit: there you're looking at 100-200 Mbit per VM, a level of load that even a smaller provider can easily reach during peak use.
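The back-of-envelope arithmetic above can be sketched as follows (the caps are the figures quoted in this post; the helper name is mine):

```shell
# per_vm: evenly split a per-host cap (in Mbit/s) across N VMs.
# Hypothetical helper illustrating the shared-limit arithmetic above.
per_vm() { echo $(( $1 / $2 )); }

per_vm 12000 4    # 12 Gbit cap, 4 VMs            -> 3000 Mbit/s each
per_vm 12000 40   # 12 Gbit cap, 40 VMs           -> 300 Mbit/s each
per_vm 2000 20    # older gen, 2 Gbit cap, 20 VMs -> 100 Mbit/s each
```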
It is great that the issue is not triggered for you as your bottleneck is elsewhere, but it is a very serious issue for several of us.
With that said, Vates is handling it as well as anyone could ask, and I thank them for the attention given and the dedication to solving it.
It is a NASTY bug, and it took a very specific situation for it to be discovered at all.
-
@Seneram ah well excuse my ignorance then, I thought people said the limits were much lower. I can see what you are saying and the big issue with that.
-
@olivierlambert is it already known in which update/release this problem will be solved?