Epyc VM to VM networking slow
-
@Forza The XOA backup performance is more related to processing than to network, at least as I understand it and have tested.
So I don't think you'll see much of a change there.
-
On Intel, the biggest bottleneck ATM is the export speed capability of the Dom0. On AMD, the backup speed is also affected by the lack of an equivalent to Intel's iPAT, but it might also depend on other factors (backup repo speed, etc.).
-
@olivierlambert Yeah, so far backups have been fast enough that they haven't posed a huge issue.
IMO if you have a huge VM (many TB) it should just be dealt with on a NAS or something instead of a VHD.
Still glad that qcow2 is coming though!
-
Hello everyone!
It looks like we are also affected by this issue. A week of investigation led us to this thread, and our test results are very close to what is described here.
We have VM routers based on OEL8 (tested with all available kernels), xcp-ng 8.2, an AMD EPYC 7443P 24-Core Processor, and BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller NICs.
Network performance degraded significantly after moving the VMs to the EPYC hosts. Tests were performed with iperf:
- iperf hosts outside of the hypervisor, VM router on the hypervisor: UDP and TCP ~1.5Gbps
- both iperf hosts and the router on the hypervisor: UDP ~1.5Gbps, TCP ~7Gbps
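For reference, a rough sketch of the kind of invocations behind numbers like these, assuming iperf3 (iperf2 syntax differs slightly; the IP address is a placeholder, not the address actually used):
# On the receiving end (external host or the VM router):
iperf3 -s
# TCP test, single stream, 30 seconds:
iperf3 -c 192.0.2.10 -t 30
# UDP test; -b 0 removes iperf3's default 1 Mbit/s UDP cap so it pushes as much as it can:
iperf3 -c 192.0.2.10 -u -b 0 -t 30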
We tried SR-IOV and, in a test with one 10Gb NIC, got ~9Gbps with TCP and around 4Gbps with UDP.
SR-IOV, however, does not seem usable for us, since it looks like we cannot use it over LACP, which is required for the other VMs' redundancy. Alternatively, we would need some additional NICs to use with SR-IOV on our routers, or to look for other connection options within our datacenter.
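If anyone wants to reproduce the SR-IOV test, here is a rough sketch of how it is set up on XCP-ng from dom0 (UUIDs are placeholders and the exact commands can vary between versions, so check the official SR-IOV documentation; the NIC driver must support it):
# Create a network backed by the NIC's virtual functions on a given PIF:
xe network-sriov-create pif-uuid=<pif-uuid> network-uuid=<network-uuid>
# Verify the SR-IOV network object was created:
xe network-sriov-list
# Then attach the VM's VIF to that SR-IOV network instead of the bonded network.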
-
Hello everyone!
Great news, now you have something to test: https://xcp-ng.org/forum/topic/10943/network-traffic-performance-on-amd-processors
Please go there, follow the instructions carefully, and report back!
-
@olivierlambert This is great, thanks for letting us know! I'll give this a shot in my lab as soon as I can.
-
The fix, which was proposed as a test to resolve some of the issues encountered, has been integrated into an official update candidate that will be released to everyone the next time we publish updates. For more information on this update, see the following post: https://xcp-ng.org/forum/post/96135
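Once published, the update should be installable with the usual XCP-ng update procedure on each host (a sketch; follow the linked post for the exact packages and any required reboot or toolstack restart):
# On each host of the pool, master first:
yum update
reboot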
-
Hello!
First of all, thank you very much for providing the fix! Following the blog post https://xcp-ng.org/blog/2025/09/01/september-2025-maintenance-update-for-xcp-ng-8-3/:
The change only affects Linux guests. To make it effective, their kernel must support the feature which enables this fix. Linux distributions that have recent enough kernels or apply fixes from the mainline LTS kernels support it. Older ones don't (example: Ubuntu 20.04). Some currently supported LTS distros don't have the required patch yet, notably RHEL 8 and 9 and their derivatives. This might change if we can convince them to apply the patch to their kernel.
Where can we find the list of supported OSes and kernels, so we can run some tests?
-
Because that would be a pretty big list. Which distros do you have in mind?
-
@olivierlambert we are using Oracle Linux; are OEL8/9/10 supported? As I understand it, OEL8 and 9 do contain a fix. We can also try any other OS you suggest. Thanks!
-
Question for @Team-Hypervisor-Kernel
-
OEL 8 & 9 wouldn't contain the fix unless they applied extra patches for this to the RHEL 8 & 9 kernel(s). I'll let the hypervisor team check the current status.
-
kernel-4.18.0-553.71.1.0.1.el8_10 (OL8) and kernel-5.14.0-570.37.1.0.1.el9_6 (OL9) do not contain the fix. kernel-6.12.0-55.29.1.0.1.el10_0 (OL10) does.
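A quick way to check whether a given Oracle Linux guest is already running one of the kernels above (the version strings are the ones quoted; compare your output against them):
# Kernel currently running in the guest:
uname -r
# All installed kernel packages, sorted oldest to newest:
rpm -q kernel | sort -V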
-
Tested the new updates on my prod EPYC 7402P pool with iperf3. Seems like quite a good uplift.
Ubuntu 24.04 VM (6 cores) -> bare metal server (6 cores) over a 2x25Gbit LACP link.
Pre-patch
- iperf3 -P1: 9.72 Gbit/s
- iperf3 -P6: 14.6 Gbit/s
Post-patch
- iperf3 -P1: 11.3 Gbit/s
- iperf3 -P6: 24.2 Gbit/s
Ubuntu 24.04 VM (6 cores) -> Ubuntu 24.04 VM (6 cores) on the same host
Pre-patch
Forgot to test this...
Post-patch
- iperf3 -P1: 13.7 Gbit/s
- iperf3 -P6: 30.8 Gbit/s
- iperf3 -P24: 40.4 Gbit/s
Our servers have Last-Level Cache (LLC) as NUMA Node enabled, as most of our VMs do not have a huge number of vCPUs assigned. This means that for the EPYC 7402P (24c/48t) we have 8 NUMA nodes. We do not, however, use xl cpupool-numa-split.
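For reference, the NUMA topology Xen sees and the optional per-node CPU pools can be inspected from dom0 roughly like this (a sketch; creating cpupools changes where domains may be scheduled, so test before using it in production):
# Host topology including NUMA node layout, as seen by Xen:
xl info -n
# Existing CPU pools:
xl cpupool-list
# Create one pool per NUMA node (the command mentioned above):
xl cpupool-numa-split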
-
That's nice! It means it scales relatively well with many threads; that's a good result.
-
@olivierlambert Can we get an updated XOA with this patch?
-
@Forza What patch are you referring to that would relate to XOA?
-
@stormi said in Epyc VM to VM networking slow:
@Forza What patch are you referring to that would relate to XOA?
It seems only recent kernels can take advantage of the improvements. From the blog post mentioned above:
The change only affects Linux guests. To make it effective, their kernel must support the feature which enables this fix.
iperf inside XOA is much slower than in other VMs (like the Ubuntu 24.04 one above).
When I run it on the same host (pool master) against our NFS SR:
Ubuntu 24.04 (kernel 6.8):
# iperf3 -c 10.12.9.4
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.0 GBytes  13.7 Gbits/sec     0            sender
[  5]   0.00-10.00  sec  16.0 GBytes  13.7 Gbits/sec                  receiver

# iperf3 -c 10.12.9.4 -P4
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  6.26 GBytes  5.37 Gbits/sec   8861           sender
[  5]   0.00-10.00  sec  6.25 GBytes  5.37 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  8.57 GBytes  7.36 Gbits/sec   8372           sender
[  7]   0.00-10.00  sec  8.57 GBytes  7.36 Gbits/sec                  receiver
[  9]   0.00-10.00  sec  9.05 GBytes  7.77 Gbits/sec  10192           sender
[  9]   0.00-10.00  sec  9.05 GBytes  7.77 Gbits/sec                  receiver
[ 11]   0.00-10.00  sec  6.12 GBytes  5.25 Gbits/sec   7144           sender
[ 11]   0.00-10.00  sec  6.11 GBytes  5.25 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  30.0 GBytes  25.8 Gbits/sec  34569           sender
[SUM]   0.00-10.00  sec  30.0 GBytes  25.8 Gbits/sec                  receiver
XOA 2025.08 (kernel 6.1):
# iperf3 -c 10.12.9.4
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  6.26 GBytes  5.37 Gbits/sec     0            sender
[  5]   0.00-10.00  sec  6.25 GBytes  5.37 Gbits/sec                  receiver

# iperf3 -c 10.12.9.4 -P4
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.71 GBytes  4.05 Gbits/sec   3987           sender
[  5]   0.00-10.00  sec  4.71 GBytes  4.05 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  4.61 GBytes  3.96 Gbits/sec   3086           sender
[  7]   0.00-10.00  sec  4.61 GBytes  3.96 Gbits/sec                  receiver
[  9]   0.00-10.00  sec  6.77 GBytes  5.81 Gbits/sec   7745           sender
[  9]   0.00-10.00  sec  6.77 GBytes  5.81 Gbits/sec                  receiver
[ 11]   0.00-10.00  sec  5.42 GBytes  4.65 Gbits/sec    629           sender
[ 11]   0.00-10.00  sec  5.42 GBytes  4.65 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  21.5 GBytes  18.5 Gbits/sec  15447           sender
[SUM]   0.00-10.00  sec  21.5 GBytes  18.5 Gbits/sec                  receiver
-
XOA's kernel should have the capability already, as it's Debian 12 with the stock kernel. Also, the bottleneck is ONLY between VMs on the same host.