Epyc VM to VM networking slow
-
@manilx said in Epyc VM to VM networking slow:
@john-c Proxmox host is a Protectli. All good. XOA will be on the single Intel host pool, no need for redundancy here.
XO on Proxmox for emergencies..... Remember: this is ALL a WORKAROUND for the stupid AMD EPYC bug!!!!!!
Not in the least the final solution. The final solution is XOA running on our EPYC production pool as it was.
Alright, in that case just use the HPE ProLiant DL360 Gen10 as a dedicated XO/XOA host. But bear in mind that while it's updating the XCP-ng installed on it, the host will be unavailable, and thus so will that instance of XO/XOA, until booting after the reboot is complete.
-
@john-c Yes, obviously. For that I have XO on a mini-pc
-
@john-c @olivierlambert
One of our standard backup jobs. This is a 100% increase!!! On a 1G LACP bond, instead of 10G on the EPYC host! 1.5 years battling with this, and in the end it's all due to the same issue as we now see.
-
@manilx it is deffo interesting to see more proof that this bug may be wider than expected.
-
@manilx said in Epyc VM to VM networking slow:
@john-c @olivierlambert
One of our standard backup jobs. This is a 100% increase!!! On a 1G LACP bond, instead of 10G on the EPYC host! 1.5 years battling with this, and in the end it's all due to the same issue as we now see.
Don't forget to also post the comparison and screenshot when you have fitted the 4-port 10G NIC with the 2 LACP bonds on the Intel HPE!
-
@john-c WILL DO! I've told purchasing to order the card you recommended. Let's see how long that'll take....
EDIT: ordered on Amazon. Expected to be here the 1st week of Nov.
Will report back then, pinging you.
-
@manilx said in Epyc VM to VM networking slow:
@john-c WILL DO! I've told purchasing to order the card you recommended. Let's see how long that'll take....
EDIT: ordered on Amazon. Expected to be here the 1st week of Nov.
Will report back then, pinging you.
The Friday (1st November 2024) after this month's update to Xen Orchestra, or is it the week after?
- 19 days later
-
@manilx I've been waiting for your ping back with the report. You said the first week of November 2024, and we are now at the beginning of the 2nd week of November 2024.
I'm wondering how it's going please, anything holding it up?
-
@john-c Hi. Ordered from Amazon that day, and after more than 2 weeks the order was cancelled by the supplier without notice. Reordered from another one and I'm still waiting....
Not easy to get one.
-
@manilx said in Epyc VM to VM networking slow:
@john-c Hi. Ordered from Amazon that day, and after more than 2 weeks the order was cancelled by the supplier without notice. Reordered from another one and I'm still waiting....
Not easy to get one.
Thanks for your reply. I hope it goes well this time. Anyway, if it still proves difficult then you can go for another quad-port 10GbE NIC which is compatible for the 2-bond LACP setup.
If the selected quad-port 10GbE NIC is available on general sale, then you can get it through the supplier who provided you with your HPE Care Packs.
-
@john-c @olivierlambert Now we're talking!!!
Here are the results of a 2-VM Delta/NBD backup (the initial one) using 2 10G NICs in a bond:
WHAT a difference when we run XOA on an Intel host instead of an EPYC one for backups.
I've said from the beginning that the slow backup speeds were due to the EPYC issue (as I got 200+ MB/s at home with measly Protectlis on 10G).
Looking at what the Synology gets: I see up to 500+ MB/s during the backup!
-
It's a good discovery that moving XOA outside the pool can make backup performance much better.
How is work on the root cause progressing? We too have quite poor network performance and would really like to see the end of this. Can we get a summary of the actions taken so far and the prognosis for a solution?
Did anyone try plain Xen on a new 6.x kernel to see if networking behaves the same there?
- 20 days later
-
Is there a difference when running `iperf3` with the `-C bbr` flag on the client side?
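As a minimal sketch of such a run (the server address `10.0.0.2` and the 4-stream count are placeholder assumptions; `-C` only works on Linux clients, and the chosen algorithm must be available on the sending side):

```shell
# On the receiving VM (assumed address 10.0.0.2):
#   iperf3 -s
# On the sending VM: 4 parallel streams, selecting BBR as the
# TCP congestion control algorithm for this test only.
iperf3 -c 10.0.0.2 -P 4 -C bbr
```

Selecting the algorithm per-run like this avoids changing the system-wide default while comparing results.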
In my testing with AMD EPYC and some other CPUs, results are more consistent overall with BBR, and better on the AMD EPYC side (but no miracle, it's still far from perfect).
AMD EPYC 7262, VM to VM, 4 threads, iperf3:
Without BBR: 4.5-6 Gbps (sometimes more; varies a lot)
With BBR: 7-8 Gbps
-
@TeddyAstie That is interesting. I had a look. The default seems to be `cubic`, but `bbr` is available using `modprobe tcp_bbr`. I also wonder if different queuing disciplines (`tc qdisc`) can help, for example mqprio, which spreads packets across the available NIC HW queues?
-
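A rough sketch of the two ideas just mentioned (assumptions: root access in dom0, an interface named `eth0` with multiple hardware TX queues; the mqprio traffic-class mapping is purely illustrative and must be adapted to the real NIC):

```shell
# Make BBR available and select it as the system default (Linux).
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Verify what is available and what is active.
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control

# Attach an mqprio qdisc to spread traffic across NIC hardware queues.
# 'eth0', the 2 traffic classes, and the priority map are assumptions.
tc qdisc replace dev eth0 root mqprio num_tc 2 \
    map 0 0 0 0 1 1 1 1 queues 2@0 2@2 hw 1
tc qdisc show dev eth0
```

Note that the `sysctl` change is not persistent across reboots; a drop-in under `/etc/sysctl.d/` would be needed for that.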
@Forza the default one seems to be `cubic`, which in my testing causes chaotic (either good or bad) network performance on XCP-ng (even on non-EPYC platforms), whereas BBR is more consistent (and also better on AMD EPYC).
I also wonder if different queuing disciplines (tc qdisc) can help. For example mqprio that spreads packets across the available NIC HW queues?
Regarding PV network, I don't think queue management will change anything as netfront/netback is single-queue. EDIT: it's multi-queue, so maybe it changes something.
-
For those who have AMD EPYC 7003 (Zen 3 EPYCs), you may find in the Processor settings in firmware:
- Enhanced REP MOVSB/STOSB (ERMS)
- Fast Short REP MOVSB (FSRM)
which are apparently disabled by default.
It could be interesting to enable them and see if it changes anything performance-wise. I am not sure if that just exposes a CPU flag, or if it actually changes the CPU behaviour. You can also try REP-MOV/STOS Streaming to see if it changes anything too.
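One way to check from dom0 or a Linux guest whether toggling these firmware options changes anything visible is to look for the corresponding CPUID feature flags. This is only a sketch: it shows what the CPU advertises, not whether the actual copy behaviour changed.

```shell
# ERMS and FSRM are advertised as lowercase flags in /proc/cpuinfo on Linux.
for flag in erms fsrm; do
  if grep -qw "$flag" /proc/cpuinfo; then
    echo "$flag: present"
  else
    echo "$flag: absent"
  fi
done
```

Comparing this output before and after flipping the firmware settings would at least tell you whether the toggle is exposed to the OS.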