Epyc VM to VM networking slow
-
@gskger It does seem quite a bit lower than the 5950X and the 7600 we tested, but:
- it is a Zen 1 if I'm not mistaken
- in the 4-thread case, with 8 threads on the physical CPU, the VMs and dom0 are actually sharing resources
- for single thread, I guess the generation and memory speed could explain the difference.
I would say that this confirms these Ryzen CPUs are not really impacted either.
Thanks for sharing, I'll update the table tomorrow.
-
@bleader you are correct, the V1756B is a low-power (45 W TDP) desktop CPU of the AMD Ryzen Embedded V1000 series, based on the Zen microarchitecture, with 4 cores and 8 threads. It operates at a base frequency of 3.25 GHz and a boost frequency of up to 3.6 GHz. The HP t740 thin client is a capable low-power, low-noise computer for running XCP-ng in a homelab, but it's not a real match for serious AMD desktop or server CPUs.
-
@gskger Returning to this:
On our business deployment, two HPE ProLiant DL325 Gen10 Plus v2 servers with AMD EPYC 7543P 32-core processors, connected via redundant 10G NICs and switched to a 10G NAS (QNAP, Synology) for storage and backups, we get backup speeds of 80-90 MiB/s tops with NBD.
On my homelab, with a Protectli mini PC connected via 10G to a 10G QNAP as well, I get 250-300 MiB/s!
This really is a problem for us, and has been since we started with XCP-ng a year ago. Slow backup/restore speeds are a hindrance in our backup strategy.
Now that I have switched my homelab from Proxmox (after using it for 3 years) to XCP-ng, I stumbled upon this speed difference, and it is incredible.
I wonder if it is not also related to this EPYC networking issue.
P.S.: I have opened a ticket, BUT I wanted to share this here as well.
-
@manilx Hi, I am working on the backup side; that is a very interesting finding. I have some questions to rule out a few hypotheses:
What storage do you use on both sides? iSCSI or NFS?
Is XO running on the master?
-
@bleader do you remember if we also had slower network speed between a VM and the Dom0 or only between 2 regular guests?
-
Not necessarily. XOA is a VM, but it's communicating with the Dom0, which is indeed a VM, but not a regular one. It could have been interesting to check the result when the XOA VM is not sitting on the same host it's backing up.
-
@olivierlambert VM to host is impacted too, although less: it reaches over 10 Gbps on a Zen 2 EPYC.
-
@florent Hi,
Both storages are NFS, all connections 10G.
In both cases XO/XOA is running on the master.
-
@olivierlambert I have tested this already. It doesn't matter if the XOA VM is on the master or another host: backup speeds are "the same".
-
@olivierlambert As @bleader mentioned, all testing shows that it affects any VM networking at all: VM to VM, VM to host, and VM to external appliances are all equally affected. The VM-to-VM case just shows half the bandwidth of the other use cases, since dom0 has to handle traffic for both VMs, which is why it was noticed first. But no matter how the VM communicates, there is an upper bandwidth ceiling that is VERY low. It is easy to reproduce with iperf3, as sketched below.
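A minimal sketch of how one could reproduce the measurement, assuming iperf3 is installed on both ends; the IP address is a placeholder for your receiver:

```
# On the receiving side (another VM, the host, or an external machine):
iperf3 -s

# On the sending VM: 4 parallel streams for 30 seconds.
# Compare the totals for VM-to-VM, VM-to-host, and VM-to-external runs.
iperf3 -c 10.0.0.2 -P 4 -t 30
```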
-
@Seneram As explained, we have been living with this for a year now, but at the time Vates told us that all was OK and that backup speeds of 80-90 MiB/s were normal.
It was just NOW, running it in the homelab on crap Intel PCs, that I saw this HUGE speed difference. And this thread has opened my eyes as well.
-
While I'm very happy to see this getting some attention now, I am a bit disappointed that this has been reported for so long (easily two years or more) and is only now getting serious attention. Hopefully it will be resolved fairly soon.
That said... if you need high-speed networking in EPYC VMs now, SR-IOV can be your friend. Using ConnectX-4 25Gb cards I can hit 22-23 Gb/s with guest VMs. Obviously SR-IOV brings along a whole other set of issues, but it's a way to get fast networking today.
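For reference, a rough sketch of the SR-IOV setup on XCP-ng via the xe CLI; treat the exact commands as an assumption and check the docs for your version. All UUIDs and the NIC name are placeholders:

```
# Create a network to carry the SR-IOV virtual functions:
xe network-create name-label=sriov-net

# Bind it to the PIF of the SR-IOV-capable NIC
# (find the PIF UUID with: xe pif-list device=eth1):
xe network-sriov-create network-uuid=<network-uuid> pif-uuid=<pif-uuid>

# A VIF created on this network is backed by a VF passed through to the guest:
xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> device=1 mac=random
```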
-
@JamesG this bug has not been reported for two years. This thread is 6 months old, and our bug report has been open for about the same amount of time.
It has had excellent attention since day one of us reporting it.
-
@Seneram If you search the forum you'll find other topics that discuss this. In January/February 2023 I reported it myself, because I was trying to build a cluster that needed high-performance networking and found that the VMs couldn't deliver it. While researching the issue then, I seem to recall seeing other topics from a year or so prior to that.
Just because this one thread isn't two years old doesn't mean it is the only topic reporting the issue.
-
@JamesG As of now, we have roughly spent €50k on this issue already (in time and expenses), so your impression that it is not being taken seriously is a bit wrong. If you want us to speed up, I'll be happy to get even more budget.
Chasing these very low-level CPU architecture issues is really costly.
-
@JamesG Sure, but none of those did the concrete troubleshooting and digging to establish where the problem is, and they looked like isolated issues rather than something broad (which it is, but people didn't look at it as such).
-
@olivierlambert I believe that this is costly; nevertheless it needs to be fixed, as these CPUs will only become more mainstream as time goes by, so it's not just a cost but a good investment. I'm sure you'll get to the bottom of this now that you're tackling it.
Looking forward to Ampere!
-
If I weren't convinced to fix it, I wouldn't throw money & time at solving the problem.
-
@manilx said in Epyc VM to VM networking slow:
@florent Hi,
Both storages are NFS, all connections 10G.
In both cases XO/XOA is running on the master.
Thank you for the test. At least it rules out the easy fixes.