Epyc VM to VM networking slow
-
@JamesG said in Epyc VM to VM networking slow:
@olivierlambert With a billion threads.
nope, Win 10 VM with 4 or 8 VCPU and 8GB RAM
But with a billion threads in a Linux VM, speed increases up to 8 threads, then it drops again. -
@olivierlambert For single-threaded iperf... yes. Our speeds match 100%, which is half the transfer rate of a single-threaded iperf on 12-year-old Xeon E3 hardware.
I understand that we've had lots of security issues in the past decade and that several steps have been taken to protect and isolate memory inside all virtualization platforms. When I first built my E3-1230 Xeon system for my homelab, VM to VM iperfs were around 20Gb/s. Nowadays that has slowed down significantly.
Anyway... I just find it hard to believe that, on a computing platform as superior as Epyc, single-threaded iperf is so much slower than on 12-year-old entry-level Intel CPUs.
Maybe I should load VMware on this system, see how it does, and report back: same hardware, different hypervisor, and compare notes.
-
@nicols said in Epyc VM to VM networking slow:
@JamesG said in Epyc VM to VM networking slow:
@olivierlambert With a billion threads.
nope, Win 10 VM with 4 or 8 VCPU and 8GB RAM
But with a billion threads in a Linux VM, speed increases up to 8 threads, then it drops again.
This is with 1 and 16 threads:
-
@nicols said in Epyc VM to VM networking slow:
@nicols said in Epyc VM to VM networking slow:
@JamesG said in Epyc VM to VM networking slow:
@olivierlambert With a billion threads.
nope, Win 10 VM with 4 or 8 VCPU and 8GB RAM
But with a billion threads in a Linux VM, speed increases up to 8 threads, then it drops again.
This is with 1 and 16 threads:
also, this:
https://nextcloud.openit.hr/s/CptZpTt4jbWcRPX
is the CPU load on the host while two Linux VMs run a 16-thread iperf (with a pathetic cumulative speed of 4 Gbit/sec).
It seems way too high for this kind of job? -
@olivierlambert said in Epyc VM to VM networking slow:
Or maybe AMD CPUs are a lot slower with memcpy()?
Has anyone reviewed this issue? Is there a way to test with a newer version of glibc? -
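(As a side note for anyone who wants to poke at this: a quick, rough way to compare the guests might be something like the commands below, assuming sysbench is available from the distro repos. It shows which glibc the guest actually runs and gives a crude memory-copy throughput figure to set the Intel and AMD hosts side by side; it's a sketch, not a definitive memcpy() benchmark.)
# glibc version actually in use inside the guest
ldd --version | head -n1
# crude single-threaded memory throughput test (1 MiB blocks, 32 GiB total)
sysbench memory --memory-block-size=1M --memory-total-size=32G run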
@nicols give me your VM specs and I'll run the exact same tests. vCPU, RAM, anything else relevant.
-
@Danp That's interesting...
-
@JamesG said in Epyc VM to VM networking slow:
@nicols give me your VM specs and I'll run the exact same tests. vCPU, RAM, anything else relevant.
Debian 12: 16 vCPU, 2GB RAM
Windows 10 Pro: 16 vCPU, 8GB RAM, Citrix VM Tools 9.3.1
On Debian Linux there is not much difference between 8 and 16 vCPU.
On Windows 10, 8 vCPU: 16 Gbit/sec; 16 vCPU: 21 Gbit/sec. -
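(For anyone reproducing these numbers, the tests in the thread boil down to roughly the following. This assumes iperf2, matching the outputs posted later in the thread, and 10.0.0.2 is only a placeholder for the receiving VM's address.)
# on the receiving VM
iperf -s
# on the sending VM: one stream, then 16 parallel streams
iperf -c 10.0.0.2 -t 30
iperf -c 10.0.0.2 -t 30 -P 16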
@JamesG said in Epyc VM to VM networking slow:
@Danp That's interesting...
Yes, it is, but as I wrote earlier, I get a full 21 Gbps Linux VM to VM on Proxmox/KVM (on the exact same host, same BIOS settings), so I think it must be some problem specific to the Epyc - Xen hypervisor combination....
-
@nicols Agreed. I'm pretty sure this is a Xen/Epyc issue.
This evening I'll build a couple of VMs to your config, run iperf, and report back the results.
-
@nicols said in Epyc VM to VM networking slow:
I get a full 21 Gbps Linux VM to VM on Proxmox/KVM
If glibc is the source of the issue, then a likely explanation for your results is that Proxmox/KVM are using an updated version of this library where the patch has been applied.
@olivierlambert Do you know if anyone on your team has looked into this?
-
We are very very busy ATM.
Also, comparing to KVM doesn't make sense at all: there's no such network/disk isolation in KVM, so you can do zero-copy, which yields much better performance (at the price of thinner isolation).
First, we should compare two fully patched systems (one Intel, one AMD) with a similar config; then we'd have a baseline and could understand why AMD is a lot slower.
-
Adding @dthenot in the loop in case it rings a bell.
-
The past couple of days have been pretty nuts, but I've dabbled with testing this, and in my configuration (XCP-ng 8.3 with all currently released patches) I top out at 15Gb/s with 8 threads on Win 10. Going to 16 threads or beyond doesn't really improve things.
Disabling core boost and SMT and setting deterministic performance in the BIOS added nearly 2Gb/s to single-threaded iperf.
When running the iperf and watching htop on the XCP-ng server, I see nearly all cores running at 15-20% for the duration of the transfer. That seems excessive.
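(For pinning down where that host CPU time actually goes, something like the following could help; these are standard Xen/dom0 tools, and mpstat just needs the sysstat package installed in dom0. This is only a sketch of what to watch while the transfer runs.)
# per-domain CPU usage as the hypervisor sees it, refreshed every second
xentop -d 1
# how guest and dom0 vCPUs are spread across physical cores
xl vcpu-list
# per-core utilisation inside dom0 while the iperf runs
mpstat -P ALL 1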
iperf on the E3-1230v2: single thread, 9.27Gb/s. Negligible improvement with more threads. Surprisingly, a similar hit on CPU usage, though not as bad: 10Gbps of traffic hits about 10% or so. Definitely not as bad as on the Epyc system.
I'll do more thorough testing tomorrow.
-
I've found that iperf isn't super great at scaling its performance, which might be a small factor here.
I too have similar performance figures VM<->VM on an AMD EPYC 7402P 24-core server: about 6-8 Gbit/s.
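(One way to take iperf's own scaling limits out of the picture would be to run several independent client/server pairs on different ports and add up the results by hand. A minimal sketch, assuming iperf2, free ports 5001-5004 on the receiver, and 10.0.0.2 as the receiver's address.)
# on the receiving VM: one server per port
for p in 5001 5002 5003 5004; do iperf -s -p "$p" & done
# on the sending VM: four independent clients in parallel
for p in 5001 5002 5003 5004; do iperf -c 10.0.0.2 -p "$p" -t 30 & done
wait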
-
Today, I got my hands on an HPE ProLiant DL325 Gen10 server with an Epyc 7502 32-core (64-thread) CPU. I installed XCP-ng 8.2.1 and applied all patches with yum update, then installed 2 Debian and 2 Windows 10 VMs. The results are very similar:
Linux to Linux VM on a single host: 4 Gbit/sec on a single thread, max 6 Gbit/sec on multiple threads.
I have tried various numbers of vCPUs (2, 4, 8, 12, 16) and various combinations of iperf threads.
Windows to Windows VM: 3.5 Gbit/sec on a single thread, and 18 Gbit/sec on multiple threads.
All this was with default BIOS settings, just changed to legacy boot.
With performance tuning in the BIOS (C-states and other settings) I believe I can get 10-15% more; I will try that tomorrow.
So, I think this confirms that this is not a Supermicro-related problem, but something in the Xen (hypervisor?) <-> AMD CPU relationship.
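(Given the earlier point about security mitigations, it might also be worth recording what mitigations Xen reports on the Intel and AMD boxes being compared. A rough sketch of what could be captured, using standard Xen tooling in dom0 plus the guest kernel's own view; nothing here is specific to this setup.)
# Xen build and host summary
xl info | grep -Ei 'xen_version|nr_cpus|cpu_mhz'
# speculation-control / mitigation summary from the hypervisor boot log
xl dmesg | grep -Ei 'speculative|xpti|spec-ctrl'
# what the guest kernel thinks is mitigated (run inside the VM)
grep . /sys/devices/system/cpu/vulnerabilities/*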
-
@olivierlambert said in Epyc VM to VM networking slow:
Also, comparing to KVM doesn't make sense at all: there's no such network/disk isolation in KVM, so you can do zero-copy, which yields much better performance (at the price of thinner isolation).
Yes, we are all aware of the KVM / Xen differences, BUT there is something important to consider here: I am getting similar results for Windows VM to VM network traffic on Proxmox and XCP-ng. This proves that the network/disk isolation on XCP-ng isn't slowing anything down.
Proxmox/KVM Linux VM to VM network speed is the same as with Windows VMs.
The problem is the much slower Linux VM to VM network traffic on a single XCP-ng host.
-
That's exactly what I'd like to confirm with the community. If we can spot a difference between Windows guests and Linux guests, it might be interesting to find out why.
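(One hedged way to start on that, not something verified in this thread: check inside the Linux guests that the PV network path and its offloads are actually in use, since the Windows guests go through the Citrix PV drivers. eth0 is just the assumed interface name.)
# driver behind the guest NIC: the PV path typically reports vif / xen-netfront here
ethtool -i eth0
# offloads that matter for VM to VM throughput
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-receive-offload|scatter-gather'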
-
@nicols said in Epyc VM to VM networking slow:
Today, I got my hands on an HPE ProLiant DL325 Gen10 server with an Epyc 7502 32-core (64-thread) CPU. I installed XCP-ng 8.2.1 and applied all patches with yum update, then installed 2 Debian and 2 Windows 10 VMs. The results are very similar:
Linux to Linux VM on a single host: 4 Gbit/sec on a single thread, max 6 Gbit/sec on multiple threads.
I have tried various numbers of vCPUs (2, 4, 8, 12, 16) and various combinations of iperf threads.
Windows to Windows VM: 3.5 Gbit/sec on a single thread, and 18 Gbit/sec on multiple threads.
All this was with default BIOS settings, just changed to legacy boot.
With performance tuning in the BIOS (C-states and other settings) I believe I can get 10-15% more; I will try that tomorrow.
So, I think this confirms that this is not a Supermicro-related problem, but something in the Xen (hypervisor?) <-> AMD CPU relationship.
Same hardware, VMware ESXi 8.0, Debian 12 VMs with 4 vCPU and 2GB RAM.
root@debian-on-vmwareto:~# iperf -c 10.33.65.159
------------------------------------------------------------
Client connecting to 10.33.65.159, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  1] local 10.33.65.160 port 59124 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/164)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0094 sec  29.0 GBytes  24.9 Gbits/sec
with more threads:
root@debian-on-vmwareto:~# iperf -c 10.33.65.159 -P4
------------------------------------------------------------
Client connecting to 10.33.65.159, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.33.65.160 port 46444 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/107)
[  1] local 10.33.65.160 port 46446 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/130)
[  2] local 10.33.65.160 port 46442 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/136)
[  4] local 10.33.65.160 port 46468 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/74)
[ ID] Interval       Transfer     Bandwidth
[  3] 0.0000-10.0142 sec  7.59 GBytes  6.51 Gbits/sec
[  1] 0.0000-10.0142 sec  15.5 GBytes  13.3 Gbits/sec
[  4] 0.0000-10.0136 sec  7.89 GBytes  6.77 Gbits/sec
[  2] 0.0000-10.0142 sec  14.7 GBytes  12.6 Gbits/sec
[SUM] 0.0000-10.0018 sec  45.6 GBytes  39.2 Gbits/sec
Will try with Windows VMs next.
I know it is apples and oranges, but I would accept a speed difference of about 10-20%.
Here, we are talking about a more than 600% difference. -
Those are really interesting results.
How can we as a community best help find the root cause and debug this issue?
For example, is it an Open vSwitch issue, or perhaps something to do with excessive context switches?
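(A few hedged starting points for that, assuming dom0 has the usual procps and Open vSwitch tools; vif1.0 below is only an example name and would need to match the actual backend interface of the test VM.)
# context switches and interrupts per second in dom0 while the transfer runs
vmstat 1
# flows the Open vSwitch kernel datapath is matching for the VM to VM traffic
ovs-dpctl dump-flows
# offload state on the backend VIF (TSO/GSO off would mean per-MTU-packet processing)
ethtool -k vif1.0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'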