Epyc VM to VM networking slow
-
@olivierlambert said in Epyc VM to VM networking slow:
I wonder if you have a CPU that is boosting correctly, because even in the worst case scenario (no NIC offload), you should have something closer to 10G than 2G
Also, can you check that all your firmware is up to date?
Finally, can you try with many iperf3 threads, like 14 or 16 and check the result?
All firmware is up to date.
I tried many iperf3 thread combinations, including 14 and 16, with no speed gain. I have tried running VMs with 1, 2, 4, 8 and 16 vCPUs, and various amounts of RAM. There is no NIC involved; I am testing with a single-server private network.
Nevertheless, the result is the same on a "real" network and on a single-server private network.
In both cases, like you said earlier, there is no actual NIC (drivers, offload...) involved, because the two VMs are on the same physical host. I am a bit lost, because I tried all the usual and common-sense stuff in the BIOS... I don't think I tried disabling SMT, so I will try that now.
If you (or anybody reading this) have an EPYC server (single or dual socket), please test the network speed between two VMs running on a single physical host. I would like to see how my results compare with motherboards from other manufacturers.
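To make results comparable, here is the kind of test I mean (a minimal sketch, assuming iperf3 is installed in both VMs; the IP address is a placeholder):

# In VM 1: start the iperf3 server
iperf3 -s

# In VM 2: single stream, then 16 parallel streams
iperf3 -c <VM1-IP> -t 10
iperf3 -c <VM1-IP> -t 10 -P 16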
-
I see a pretty large difference when running far more iperf3 processes; it's weird you don't (e.g. I can reach 16G+ vs 8G on the same host with 14 iperf3 threads vs 1).
-
@olivierlambert said in Epyc VM to VM networking slow:
I see a pretty large difference when running far more iperf3 processes; it's weird you don't (e.g. I can reach 16G+ vs 8G on the same host with 14 iperf3 threads vs 1).
iperf2:
root@debilan-12-1:~# iperf -c 10.33.66.135
...
[ 1] 0.0000-10.0191 sec 3.25 GBytes 2.78 Gbits/sec
root@debilan-12-1:~# iperf -c 10.33.66.135 -P16
...
[SUM] 0.0000-10.0422 sec 5.19 GBytes 4.44 Gbits/sec
iperf3:
root@debilan-12-1:~# iperf3 -c 10.33.66.135
...
[ 5] 0.00-10.00 sec 3.91 GBytes 3.36 Gbits/sec 180 sender
[ 5] 0.00-10.00 sec 3.91 GBytes 3.36 Gbits/sec receiver
root@debilan-12-1:~# iperf3 -c 10.33.66.135 -P16
...
[SUM] 0.00-10.01 sec 3.83 GBytes 3.29 Gbits/sec 0 sender
[SUM] 0.00-10.01 sec 3.83 GBytes 3.29 Gbits/sec receiver
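(Worth noting: iperf3 releases before 3.16 run all -P streams in a single thread, so -P16 may not scale the way iperf2's real threads do. One workaround is several independent iperf3 processes; a sketch, with arbitrary port numbers and process count:)

# Server side: one daemonized iperf3 server per port
for p in 5201 5202 5203 5204; do iperf3 -s -p $p -D; done
# Client side: one process per port, all in parallel
for p in 5201 5202 5203 5204; do iperf3 -c 10.33.66.135 -p $p -t 10 & done; wait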
-
Try iperf3 with 2 threads
-
@olivierlambert said in Epyc VM to VM networking slow:
Try iperf3 with 2 threads
[SUM] 0.00-10.00 sec 4.78 GBytes 4.10 Gbits/sec 318 sender [SUM] 0.00-10.00 sec 4.78 GBytes 4.10 Gbits/sec receiver
-
In any case, here's what's going on: since you are sending packets that never leave the host, no NIC offload is involved. And since Xen is isolating the guests from each other, it's basically up to memcpy() to deal with all the packets you send to the other guest. So you are limited by the capacity of your Dom0 to do many memcpy() operations per second. There's no fast path in this situation.
What could keep it at only a few Gbit/s? Could be no turbo, high RAM latency, or many other things. The benchmarks on my host aren't as bad as yours, though (in particular, they scale with more iperf3 processes in parallel).
I would check whether the CPU is able to turbo (thermal limitations? RAM speed? latency?). It's hard to give a precise answer since there are many parameters.
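From dom0, xenpm can show whether the cores actually reach their boost frequencies (a quick sketch; CPU 0 is just an example):

# Show the cpufreq governor and available P-states for CPU 0
xenpm get-cpufreq-para 0
# Sample C-state/P-state residency for 5 seconds while iperf3 is running
xenpm start 5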
-
@olivierlambert said in Epyc VM to VM networking slow:
In any case, here's what's going on: since you are sending packets that never leave the host, no NIC offload is involved. And since Xen is isolating the guests from each other, it's basically up to memcpy() to deal with all the packets you send to the other guest. So you are limited by the capacity of your Dom0 to do many memcpy() operations per second. There's no fast path in this situation.
What could keep it at only a few Gbit/s? Could be no turbo, high RAM latency, or many other things. The benchmarks on my host aren't as bad as yours, though (in particular, they scale with more iperf3 processes in parallel).
I would check whether the CPU is able to turbo (thermal limitations? RAM speed? latency?). It's hard to give a precise answer since there are many parameters.
On dom0, I get a normal speed of 9.5 / 9.5 Gbit/s to another physical server, via a 10G switch.
-
Again, you need to understand: sending packets to another host is very different (since it goes through the NIC). So the Dom0 speed has nothing to do with what's happening behind the scenes between regular guests on the same physical host.
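You can see that in-host path from dom0 (a sketch; XCP-ng uses Open vSwitch by default): packets go VM 1 → vif → OVS bridge in dom0 → vif → VM 2, with dom0 copying every single packet.

# List the virtual switch ports: same-host VM traffic never leaves these
ovs-vsctl show
# The per-VM backend interfaces living in dom0
ip -br link | grep vif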
-
@olivierlambert said in Epyc VM to VM networking slow:
I have a pretty large diff when having far more process iperf3 process, it's weird you don't (eg I can reach 16G+ vs 8G on the same host with 14 iperf threads vs 1)
Is this 16 Gbit/s between two VMs on the same physical host?
If yes, which HW is it?
-
Yes, between 2x VMs on the same host. Ryzen 5 7600.
-
@olivierlambert said in Epyc VM to VM networking slow:
Yes, between 2x VMs on the same host. Ryzen 5 7600.
We did some more tests and BIOS tweaking.
We are getting a maximum of 5 Gbps VM-to-VM traffic on Linux.
But we also did tests with Windows, and with multiple iperf threads we are able to achieve up to 18 Gbps VM to VM on the same physical host. So, where is the catch?
Anybody here with an EPYC server willing to do some tests?
Best Regards!
-
You mean 2x Windows VMs (which versions?) on the same EPYC host running XCP-ng, and you got 18G with multiple threads (how many?) vs Linux in the same configuration (which kernel/distro? same VM cores & memory?), right?
-
@olivierlambert said in Epyc VM to VM networking slow:
You mean 2x Windows VMs (which versions?) on the same EPYC host running XCP-ng, and you got 18G with multiple threads (how many?) vs Linux in the same configuration (which kernel/distro? same VM cores & memory?), right?
Yes, 2x Win10 Pro on the same host with 8-12 parallel threads, vs Debian 12 with the same CPU/memory configuration, also with 8-12 parallel threads.
-
Do you have tools and PV drivers installed in Windows? Which version of the tools?
-
@olivierlambert said in Epyc VM to VM networking slow:
Do you have tools and PV drivers installed in Windows? Which version of the tools?
Citrix VM Tools 9.3.1 are installed.
-
The difference between Windows and Linux is really weird. In theory, you should get a similar network speed on your Linux VMs vs Windows.
What is the template you are using for your Debian VMs?
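It would also be worth comparing the offload settings inside the Debian guest (a sketch; eth0 is an assumption, adjust to your interface name):

# Inside the Debian VM: see which offloads are active on the PV NIC
ethtool -k eth0 | grep -E 'segmentation|scatter-gather|checksum'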
-
I cited this performance problem on Epyc early this year when I was building a cluster for a company. I couldn't get acceptable VM-to-VM performance, so it was SR-IOV to the rescue, and getting that to work was a mess.
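(For reference, SR-IOV on XCP-ng is set up roughly like this; a sketch from memory with placeholder UUIDs, so check the docs before relying on it:)

# Create a network, then bind it to an SR-IOV-capable PIF
xe network-create name-label=sriov-net
xe network-sriov-create pif-uuid=<pif-uuid> network-uuid=<network-uuid>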
I've got Supermicro H12SSL boards with 7302p processors. I don't have all the memory slots populated, so memory throughput isn't as good as it could be, but it sounds like other people are having a similar experience to me.
Currently I'm on the latest 8.3, patched. NICs are Mellanox ConnectX-4 Lx (not that they matter in guest-to-guest traffic).
That said... my nearly 12-year-old Xeon E3-1230v2 server crushes the Epyc in guest-to-guest traffic. Quick test:
Xeon E3-1230v2, iperf3 with 4 threads (Debian guests):
[SUM] 0.00-10.00 sec 15.2 GBytes 13.1 Gbits/sec 0 sender
[SUM] 0.00-10.04 sec 15.2 GBytes 13.0 Gbits/sec receiver
Epyc 7302p, iperf3 with 4 threads (Debian guests):
[SUM] 0.00-10.00 sec 7.66 GBytes 6.58 Gbits/sec 870 sender
[SUM] 0.00-10.03 sec 7.65 GBytes 6.55 Gbits/sec receiver
Interesting to note the retransmits on the Epyc transfer.
These two CPUs have similar single-thread performance and similar clock rates. All the C and P states on the Epyc systems have been tweaked.
Guest-to-guest traffic is seemingly really impaired compared to the old Xeons.
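To double-check that the C/P-state tweaks actually took effect under Xen, dom0 can report them (a sketch; CPU 0 as an example):

# Show the C-states Xen is actually using on CPU 0
xenpm get-cpuidle-states 0
# Deep C-states can also be capped on the Xen command line, e.g. max_cstate=1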
-
@JamesG can you test between 2x Windows guests (and/or between 2x BSD guests)?
It might be a weird Linux thing. What's the kernel version used in your Debian?
-
@JamesG said in Epyc VM to VM networking slow:
I've got Supermicro H12SSL boards with 7302p processors. I don't have all the memory slots populated, so memory throughput isn't as good as it could be, but it sounds like other people are having a similar experience to me.
That said... my nearly 12-year-old Xeon E3-1230v2 server crushes the Epyc in guest-to-guest traffic. Quick test:
Guest-to-guest traffic is seemingly really impaired compared to the old Xeons.
Finally, someone with a similar experience; I was starting to think I was going a bit crazy.
There was an update of the Xen hypervisor today, so I will do another test...
-
Here's my thread from earlier this year:
https://xcp-ng.org/forum/topic/6916/tracking-down-poor-network-performance/11
Here's a referenced thread in that thread:
I'd be curious how this works in VMware.