This is over 4 months old, and is affecting a LOT of my customers.
It is a BIG problem for my company.
Is there anything that we can do to help resolve this?
@nicols said in Epyc VM to VM networking slow:
Today, I got my hands on an HPE ProLiant DL325 Gen10 server with an Epyc 7502 32-core (64-thread) CPU. I installed XCP-ng 8.2.1 and applied all patches with yum update, then installed 2 Debian and 2 Windows 10 VMs. Results are very similar:
Linux to Linux VM on a single host: 4 Gbit/sec on a single thread, max 6 Gbit/sec on multiple threads.
I have tried various numbers of vCPUs (2, 4, 8, 12, 16) and various combinations of iperf threads.
Windows to Windows VM: 3.5 Gbit/sec on a single thread, and 18 Gbit/sec on multiple threads.
All this was with default BIOS settings, just changed to legacy boot.
With performance tuning in the BIOS (C-states and other settings), I believe I can get 10-15% more; I will try that tomorrow.
So I think this confirms that this is not a Supermicro-related problem, but something in the interaction between Xen (the hypervisor?) and the AMD CPU.
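A sweep like the one described above (varying the number of parallel iperf streams between a pair of VMs) can be scripted; this is a minimal sketch, assuming iperf2 is installed in both guests, `iperf -s` is already running on the target VM, and `SERVER` is a hypothetical placeholder for that VM's address:

```shell
# Sketch: sweep iperf parallel-stream counts against a target VM.
# SERVER is a placeholder address, not one of the VMs in this thread.
SERVER="${SERVER:-192.0.2.10}"

if command -v iperf >/dev/null 2>&1; then
    for threads in 1 2 4 8 16; do
        echo "=== ${threads} parallel stream(s) ==="
        # -P: number of parallel client streams; -t: test length in seconds.
        # tail keeps only the summary ([SUM] or single-stream) line.
        iperf -c "$SERVER" -P "$threads" -t 10 | tail -n 1
    done
else
    echo "iperf not installed; skipping sweep"
fi
```

Running the same sweep on each hypervisor keeps the stream counts comparable across the XCP-ng, Proxmox, and ESXi numbers quoted in this thread.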
Same hardware, VMware ESXi 8.0, Debian 12 VMs with 4 vCPUs and 2 GB RAM.
root@debian-on-vmwareto:~# iperf -c 10.33.65.159
------------------------------------------------------------
Client connecting to 10.33.65.159, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 1] local 10.33.65.160 port 59124 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/164)
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-10.0094 sec 29.0 GBytes 24.9 Gbits/sec
with more threads:
root@debian-on-vmwareto:~# iperf -c 10.33.65.159 -P4
------------------------------------------------------------
Client connecting to 10.33.65.159, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.33.65.160 port 46444 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/107)
[ 1] local 10.33.65.160 port 46446 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/130)
[ 2] local 10.33.65.160 port 46442 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/136)
[ 4] local 10.33.65.160 port 46468 connected with 10.33.65.159 port 5001 (icwnd/mss/irtt=14/1448/74)
[ ID] Interval Transfer Bandwidth
[ 3] 0.0000-10.0142 sec 7.59 GBytes 6.51 Gbits/sec
[ 1] 0.0000-10.0142 sec 15.5 GBytes 13.3 Gbits/sec
[ 4] 0.0000-10.0136 sec 7.89 GBytes 6.77 Gbits/sec
[ 2] 0.0000-10.0142 sec 14.7 GBytes 12.6 Gbits/sec
[SUM] 0.0000-10.0018 sec 45.6 GBytes 39.2 Gbits/sec
Will try with Windows VMs next.
I know it is apples and oranges, but I would accept a speed difference of about 10-20%.
Here, we are talking about more than a 600% difference.
@olivierlambert said in Epyc VM to VM networking slow:
Also, comparing to KVM doesn't make sense at all: there's no such network/disk isolation in KVM, so you can do zero copy, which will yield much better performance (at the price of the thinner isolation).
Yes, we are all aware of the KVM/Xen differences, BUT there is something important to consider here: I am getting similar results for Windows VM to VM network traffic on Proxmox and XCP-ng. This suggests that network/disk isolation on XCP-ng isn't what is slowing things down.
Proxmox/KVM Linux VM to VM network speed is the same as with Windows VMs.
The problem is the much slower Linux VM to VM network traffic on a single XCP-ng host.
@olivierlambert said in XCP-ng not booting when IPMI host interface is disabled on Supermicro H12 board:
Okay worth trying with an installed 8.3 to see if you still have the crash. Don't forget to update it.
Sorry, I didn't have time to test it with 8.3, but I applied the October 2023 updates on Friday, and after that XCP-ng boots normally, with or without the IPMI host interface.
@JamesG said in Epyc VM to VM networking slow:
@Danp That's interesting...
Yes, it is, but as I wrote earlier, I get the full 21 Gbit/sec Linux VM to VM on Proxmox/KVM (on the exact same host, same BIOS settings), so I think it must be some problem in the interaction between Epyc and the Xen hypervisor.
@JamesG said in Epyc VM to VM networking slow:
@nicols give me your VM specs and I'll run the exact same tests. vCPU, RAM, anything else relevant.
Debian 12: 16 vCPUs, 2 GB RAM
Windows 10 Pro: 16 vCPUs, 8 GB RAM, Citrix VM Tools 9.3.1
On Debian Linux there is not much difference between 8 and 16 vCPUs.
On Windows 10: 8 vCPUs gives 16 Gbit/sec, 16 vCPUs gives 21 Gbit/sec.
@nicols said in Epyc VM to VM networking slow:
@nicols said in Epyc VM to VM networking slow:
@JamesG said in Epyc VM to VM networking slow:
@olivierlambert With a billion threads.
Nope, a Win 10 VM with 4 or 8 vCPUs and 8 GB RAM.
But with many threads in a Linux VM, speed increases up to 8 threads, then it drops again.
This is with 1 and 16 threads:
Also, this:
https://nextcloud.openit.hr/s/CptZpTt4jbWcRPX
is the CPU load on the host while 2 Linux VMs run a 16-thread iperf (with a pathetic cumulative speed of 4 Gbit/sec).
It seems way too high for this kind of job.
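To pin down where those cycles go on the host rather than inside the guests, per-domain CPU usage can be sampled from the control domain; a minimal sketch, assuming it is run in the XCP-ng dom0 console while the iperf test is in progress:

```shell
# Sketch: snapshot per-domain CPU usage from the XCP-ng dom0 during the
# iperf run. xentop ships with Xen: -b is batch (non-interactive) mode,
# -d 2 is the delay between samples, and -i 2 takes two iterations so
# the CPU(%) column has a delta to compute from.
if command -v xentop >/dev/null 2>&1; then
    xentop -b -i 2 -d 2
else
    echo "xentop not found (run this in the XCP-ng dom0)"
fi
```

A high CPU(%) figure for dom0 (rather than for the two guest domains) would point at the backend/netback path rather than the guests themselves.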