Epyc VM to VM networking slow
-
@olivierlambert I have tested this already. It doesn't matter if the XOA VM is on the master or another host. Backup speeds are "the same"
-
@olivierlambert As @bleader mentioned, all testing shows that it affects any VM networking at all. VM to VM, VM to host, and VM to external appliances are all equally affected. The VM-to-VM case just shows half the bandwidth of the other use cases, since the host has to handle traffic for both VMs, which is why it was noticed first. But no matter how the VM communicates, there is an upper bandwidth ceiling that is VERY low.
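For anyone wanting to reproduce those measurements, a quick sketch with iperf3 (the hostname is a placeholder; run the server end in another VM, in dom0, or on an external box to exercise each path):

```shell
# Receiver side (run in a VM, in dom0, or on an external machine):
iperf3 -s

# Sender VM: 30-second run with 4 parallel streams.
# "receiver.lan" is a placeholder for the server's address.
iperf3 -c receiver.lan -t 30 -P 4

# Same test, but with the data flowing in the reverse direction:
iperf3 -c receiver.lan -t 30 -P 4 -R
```

These commands need two real endpoints, so adapt the addresses to your own lab.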
-
@Seneram As explained, we have been living with this for a year now, but at the time Vates told us that all was OK and that backup speeds of 80-90 MiB/s were normal.
It was only NOW, running it in the HomeLab on crappy Intel PCs, that I saw that HUGE speed difference. And this thread has opened my eyes as well.....
-
While I'm very happy to see this getting some attention now, I am a bit disappointed that this has been reported for so long (easily two years or more) and is only now getting serious attention. Hopefully it will be resolved fairly soon.
That said... if you need high-speed networking in EPYC VMs now, SR-IOV can be your friend. Using ConnectX-4 25Gb cards I can hit 22-23 Gb/s with guest VMs. Obviously SR-IOV brings along a whole other set of issues, but it's a way to get fast networking today.
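For reference, a rough sketch of what that setup looks like with the XCP-ng `xe` CLI; the UUIDs are placeholders for your environment, and the NIC firmware/driver must support SR-IOV:

```shell
# Create a network to hold the virtual functions, then enable SR-IOV
# on the PIF backing the ConnectX-4 port (UUIDs are placeholders):
xe network-create name-label=sriov-net
xe network-sriov-create pif-uuid=<pif-uuid> network-uuid=<sriov-net-uuid>

# Attach the VM to that network; on boot it receives a VF and talks
# to the NIC directly, bypassing the dom0 network datapath:
xe vif-create vm-uuid=<vm-uuid> network-uuid=<sriov-net-uuid> device=1
```

Caveats apply as noted above: VMs with a VF attached typically can't be live-migrated, and the guest needs the Mellanox driver installed.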
-
@JamesG This bug has not been reported for two years. This thread is 6 months old, and our bug report has been open about the same amount of time.
It has had excellent attention since day one of our reporting it.
-
@Seneram If you search the forum you'll find other topics that discuss this. In January/February 2023 I reported it myself because I was trying to build a cluster that needed high-performance networking and found that the VMs couldn't do it. While researching the issue then, I seem to recall seeing other topics from a year or so prior to that.
Just because this one thread isn't two years old doesn't mean this is the only topic reporting the issue.
-
@JamesG As of now, we have spent roughly €50k on this issue already (in time and expenses), so your impression that it isn't being taken seriously is a bit wrong. If you want us to speed up, I'll be happy to get even more budget
Chasing these very low-level CPU architecture issues is really costly.
-
@JamesG Sure, but none of those did the concrete troubleshooting and digging needed to establish where the problem is, and they also looked like isolated issues rather than something broad (which it is, but people didn't look at it that way).
-
@olivierlambert I believe that this is costly; nevertheless it needs to be fixed, as these CPUs will become more mainstream as time goes by, so it's not only a cost but a good investment. I'm sure you'll get to the bottom of this now that you're tackling it.
Looking forward to Ampere!
-
If I weren't convinced it should be fixed, I wouldn't throw money & time at solving the problem
-
@manilx said in Epyc VM to VM networking slow:
@florent Hi,
Both storage repositories are NFS; all connections are 10G.
In both cases XO/XOA is running on the master.

Thank you for the test. At least it rules out the easy fixes
-
@olivierlambert said in Epyc VM to VM networking slow:
If I wouldn't be convinced to fix it, I wouldn't throw money & time to solve the problem
I think everyone knows this. Nevertheless, it is frustrating for anyone when it becomes a bottleneck.
I am curious: do we know if this happens on plain Xen systems, or on XCP-ng systems where Open vSwitch is not used?
-
It happens on every Xen version we tested; the issue is clearly inside the Xen hypervisor, related to how the netif calls trigger something slow inside AMD EPYC CPUs (Ryzen ones are not even affected)
-
@manilx Do you use NBD for delta backups? (It's in the advanced settings.)
-
@florent Florent, yes I do use NBD for all backups. And checking the backup log of the completed jobs I see that NBD is being used.
-
@manilx said in Epyc VM to VM networking slow:
@florent Florent, yes I do use NBD for all backups. And checking the backup log of the completed jobs I see that NBD is being used.
Could you test disabling NBD?
-
-
@manilx Running a test backup, one with NBD and then again without. Will report asap.
-
@florent Result of backup tests:
-
It may not be of any help, but wanted to add a little bit of info this anyway.
I'm seeing the same results on a set of Ubuntu 22.04 VMs in my Threadripper-based cluster. I didn't expect to see different results, since Threadripper is really just EPYC with some features turned off.
Specifically, I tested on a 16-core 1950X host and a 32-core 3970X host, both with 8 vCPUs on each VM; they topped out at about 8 Gbit/s, like most others are seeing.
Figured I'd add it in here.