Epyc VM to VM networking slow
-
@olivierlambert
vm -> dom0 results in "no route to host": firewall? Results will be shown for dom0 -> vm, listed by each kernel installed on the VM.
Just as earlier, the VM is installed via the XOA Hub, with 4 vCPUs and 4GB RAM. The host CPU is an AMD EPYC 7302P.
VM kernel ver.   Run1          Run2          Run3
kernel 4.19.0    8.47 Gbit/s   8.82 Gbit/s   8.43 Gbit/s
kernel 5.10.0    7.12 Gbit/s   7.07 Gbit/s   7.11 Gbit/s
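For reference, results like these can be reproduced with a plain iperf run between dom0 and the VM. A minimal sketch; the post doesn't name the exact tool, so iperf3 is an assumption, and <vm-ip> is a placeholder:
# on the VM (server side)
iperf3 -s
# on dom0 (client side), pointing at the VM's address
iperf3 -c <vm-ip> -t 30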
-
Yes, disable the firewall first (only in a testing lab, obviously) with
iptables -F
-
@olivierlambert how do I restore the iptables again afterwards? Other than reboot ofc
Update: Tests done
vm -> dom0       Run1          Run2          Run3
kernel 4.19.0    5.84 Gbit/s   5.77 Gbit/s   5.85 Gbit/s
kernel 5.10.0    1.25 Gbit/s   1.26 Gbit/s   1.28 Gbit/s
Specs are the same as in the previous post.
-
Thanks, so at least it confirms something we are also spotting on our side. We found the exact commit.
-
Here are the Opterons with the firewall dropped:
source   destination   OS          Kernel           Speed Average
vm       dom           debian 10   4.19.0-6-amd64   6.57 Gbits/sec
dom      vm            debian 10   4.19.0-6-amd64   1.79 Gbits/sec
vm       dom           truenas     6.6.20           2.01 Gbits/sec
dom      vm            truenas     6.6.20           1.82 Gbits/sec
host     vm            debian 10   4.19.0-6-amd64   5.32 Gbits/sec
host     vm            truenas     6.6.20           1.92 Gbits/sec
host     dom           debian      4.19.0+1         8.97 Gbits/sec
-
@probain said in Epyc VM to VM networking slow:
I restore the iptables again afterwards? Other than reboot
This worked for me:
action    command
save      iptables-save > firewall.conf
flush     iptables -F
restore   cat firewall.conf | iptables-restore
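One caveat: iptables -F only flushes the rules; it does not reset the chains' default policies, so if INPUT or FORWARD defaults to DROP, traffic can still be blocked after flushing. A fuller lab-only sketch (same commands as above, plus policy resets):
iptables-save > firewall.conf     # save current rules (the dump includes policies)
iptables -P INPUT ACCEPT          # also open the default policies, in case they are DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F                       # flush all rules in the filter table
# ... run the tests ...
iptables-restore < firewall.conf  # restore rules and policies from the saved dump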
-
Here's a little test I just ran between VMs over SMB on my Threadripper 7960X build on a Supermicro H13SRA-TF motherboard. Definitely not too bad; these VMs are on different SRs.
-
@sluflyer06 This test doesn't say anything other than that you have a 10G NIC, and we already knew that the limit for the latest-gen AMD platforms is just above 10G. If you put in a 25G NIC, you can likely only use about half of that capacity, and for those of us running this in actual datacenters that is a pretty critical issue. Even more so since the limit seems to be shared per host: if the limit is 12 Gbit, 4 VMs running on the same host get 3 Gbit each. And when you realize lots of us may have 20-40 VMs per server that all use a decent portion of the network, it gets really scary: that works out to 300-600 Mbit per VM.
Or even worse: for those on earlier generations of the AMD platform, where the limit is around 2-4 Gbit, you are now looking at 100-200 Mbit per VM, a level that even a smaller provider can easily hit during peak use times.
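For anyone who wants to verify the shared-per-host behaviour on their own hardware, a rough sketch (assumes iperf3 with a server reachable from the VMs; <server-ip> is a placeholder):
# run from each of N test VMs on the same host, at the same time
iperf3 -c <server-ip> -t 30 -P 4
# if per-VM throughput drops to roughly 1/N of a single VM's result while
# the total stays flat, the limit is shared per host rather than per VM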
It is great that the issue is not triggered for you, since your bottleneck is elsewhere, but it is a very serious issue for several of us.
With that said, Vates is handling it as well as anyone could ask, and I thank them for the attention given and their dedication to solving it.
It is a NASTY bug, and it took a very specific set of circumstances for it to be discovered.
-
@Seneram Ah well, excuse my ignorance then; I thought people said the limits were much lower. I can see what you are saying and the big issue with that.
-
@olivierlambert is it already known in which update/release this problem will be solved?
-
@LennertvdBerg they are still trying to figure this one out.
And an estimated date for a full fix is not in sight just yet, from what I know. At least I haven't been informed about one in my ticket with them. But I do know they are still working very hard on this.
-
That's correct, it's a long investigation, very likely related to the AMD microarchitecture itself. It's not a trivial thing to fix. We've seen various improvements here and there, but nothing big so far. We're still working on it, and as Vates grows, we can put more resources toward handling the issue.
-
Just out of curiosity, how is everyone that's experiencing this issue currently dealing with it while the issue is being investigated? I was sort of naively hoping that it would get sorted by the 8.3 release, but now that those hopes have been dashed I'm trying to see what options I have to work around the issue.
-
We spread network-heavy VMs across the cluster, since it is a per-physical-host limit. We also changed our design a bit: where we originally intended to have all levels of routers virtual, we split out the core routers and kept them physical.
-
@timewasted Thankfully our VMs are fine with a 1 Gbit connection.
The exception is XOA itself during backups: we're getting a max of 80-90 MB/s. This is all on 10 Gbit connections.
At my homelab, with a measly Protectli VP6670 (2.5 Gbit management connection), I can fully saturate the network port at 200-300 MB/s. I'm sure it's because of this EPYC issue that we don't get more speed at the production site.
-
@manilx Is your NAS virtualized on the host, or a separate physical box?
-
@sluflyer06 Slow Business: our backup NASes, a Synology DS3622XS and a QNAP h1288X, are both connected via 10G to a 10G switch; both HP EPYC hosts are also connected to the same switch via 10G.
Fast Homelab: the backup NAS is the same QNAP via 10G, and the 2 Protectli VP6670 hosts are connected on the management interface via 2.5G.
-
@manilx I don't think it is directly related, given just how low it is. But we also see similar lower-than-expected speeds on backups.
-
@Seneram I don't know what the issue is, but the only difference is the host/CPU.
On a beast of a host I get 1/3 of the backup speed that I get on a mini-PC. This is with the XOA/XO VMs on the hosts themselves.
-
@manilx Oh, I absolutely agree that it is an issue. Since backups are handled by the XOA VM, whatever is causing our slowdowns for networking between VMs (and out of VMs to external hosts) might well impact the XOA backup process too.
What do you think, @olivierlambert: are these perhaps directly related? It sure would explain the very low backup speeds we see as well (we have fully loaded Synology FS2500s, all-flash, with write-intensive SSDs).
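One way to narrow that down (a sketch, assuming iperf3 can be installed on both the XOA/XO VM and the NAS, e.g. via a community package; <nas-ip> is a placeholder):
# from inside the XOA/XO VM, toward the backup NAS
iperf3 -c <nas-ip> -t 30
# if this also tops out far below 10G line rate, the backup slowness is likely
# the same VM networking limit rather than the backup code path itself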