Epyc VM to VM networking slow
-
If XO sits outside the master, the traffic will have to go through a physical NIC (from the host to the other host where the XO VM resides) rather than from Dom0 to the VM directly.
-
@olivierlambert Sure, but the traffic still goes into a VM, through it, and then out again, so it is also affected by the issue. Any traffic passing through a VM is affected by this bug, as we established earlier.
-
IIRC, as long as the traffic goes via a physical NIC, the impact is greatly reduced. That's why it's better to test with XO outside the master itself, so that the traffic leaves the host. That's because of the NIC offload work.
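To make the comparison concrete, here's a minimal iperf3 sketch for measuring both paths (assuming iperf3 is installed in the guests; the 10.0.0.2 address and 30-second duration are placeholders for your setup):

```
# On the receiving VM (address assumed to be 10.0.0.2; adjust to your network):
iperf3 -s

# From a VM on the same EPYC host -- traffic stays inside the host,
# so it takes the affected Dom0-to-VM path:
iperf3 -c 10.0.0.2 -t 30

# From a VM on a different host -- traffic crosses the physical NIC,
# where the hardware offloads apply:
iperf3 -c 10.0.0.2 -t 30
```

If the bug is in play, the second run should show noticeably higher throughput than the first.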
-
@olivierlambert said in Epyc VM to VM networking slow:
IIRC, as long as the traffic goes via a physical NIC, the impact is greatly reduced. That's why it's better to test with XO outside the master itself, so that the traffic leaves the host. That's because of the NIC offload work.
@Seneram What olivierlambert is saying is to have XO/XOA on another system which the pool connects to, but outside of the other pools.
-
@john-c You mean on another host not belonging to the "EPYC" pool? Could try that; I have XO running on a Protectli, BUT they only have 1 Gb/s networking.....
-
@manilx said in Epyc VM to VM networking slow:
@john-c You mean on another host not belonging to the "EPYC" pool? Could try that; I have XO running on a Protectli, BUT they only have 1 Gb/s networking.....
Yes. If you can do it on a Protectli with 4 or 6 ports, those can potentially have 2.5 Gb/s LAN, or even 10 Gb/s LAN if using an SFP+ module.
-
@john-c Don't have one of those @office, only 2 @homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5 Gb/s NICs.
Will install XCP-ng later today and try tomorrow....
-
@manilx said in Epyc VM to VM networking slow:
@john-c Don't have one of those @office, only 2 @homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5 Gb/s NICs.
Will install XCP-ng later today and try tomorrow....
You can even do it on another actual server, as long as it is outside of the EPYC server pool(s) and preferably with a non-affected CPU, as this will force it to use a physical interface.
-
@john-c Could do it on our DR pools on older Intel hardware.
But as it is, I prefer to try the Minisforum, which is fast, and I can connect it to the same switches as the business EPYC pool.
Will report back tomorrow with results.
-
@manilx said in Epyc VM to VM networking slow:
@john-c Could do it on our DR pools on older Intel hardware.
But as it is, I prefer to try the Minisforum, which is fast, and I can connect it to the same switches as the business EPYC pool.
Will report back tomorrow with results.
Depending on the results, another server (or an additional one), such as the Intel ones from the DR pool or a modern Intel-based server, would be unaffected and could host it. Then have the AMD EPYC pools connect to that Intel server hosting XO/XOA, with the XO/XOA VM's affinity set to the Intel-based host for the duration of the EPYC bug.
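For the affinity part, a rough sketch with the xe CLI (the "XOA" and "intel-host" name labels are placeholders for your actual VM and host; note that affinity is a soft start-on preference, not a hard pin):

```
# Look up the UUIDs of the XO/XOA VM and the Intel host:
xe vm-list name-label="XOA" params=uuid
xe host-list name-label="intel-host" params=uuid

# Set the VM's home server, so it starts on the Intel host when possible:
xe vm-param-set uuid=<vm-uuid> affinity=<host-uuid>
```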
-
@john-c That would only work for someone who can have non-EPYC machines handle this, though. Unfortunately that's not an option for us.
-
@Seneram said in Epyc VM to VM networking slow:
@john-c That would only work for someone who can have non-EPYC machines handle this, though. Unfortunately that's not an option for us.
So it would need to be a separate host outside of the other EPYC pool(s); in other words, it can be another AMD or an Intel, though Intel is best, as long as it's outside of all the other pools and just hosting the XO/XOA VM.
Then have the other EPYC pool(s) connect to the XO/XOA VM on its separate hosting system. That way there will be less of an impact, as the EPYC servers will have to use their physical NICs, with their hardware offloads, in order to reach XO/XOA.
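If you want to confirm those offloads are actually active on the physical path, a quick check from Dom0 on an EPYC host (eth0 here is a placeholder for the actual physical interface carrying the traffic):

```
# Run in Dom0; replace eth0 with the physical NIC in use:
ethtool -k eth0 | grep -E 'segmentation-offload|receive-offload|checksumming'
```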
-
@john-c If this works, the 600€ Minisforum will have a new job! It's sitting idle in the cabinet now.
-
@manilx said in Epyc VM to VM networking slow:
@john-c If this works, the 600€ Minisforum will have a new job! It's sitting idle in the cabinet now.
Depending on the results of the test, I would recommend actual server hardware for hosting the XO/XOA VM, as server-grade hardware receives more QA than desktops, laptops, and mini computers. So you may need to get additional server hardware to use for the XO/XOA VM.
Actual server hardware also has out-of-band management controllers (BMCs). Non-server-grade hardware often lacks this functionality, so remotely managing and monitoring it is much harder or even impossible.
Finally, on top of this, server hardware is more likely to be on the XenServer HCL, and so more likely to be eligible for paid support from Vates.
-
@olivierlambert The user manilx is going to test running XO/XOA on an Intel-based host, to see what the results are like with the EPYC pool(s) connecting to it.
They're going to run the test tomorrow and report back with results. I have posted my recommendation (opinion) above: perhaps have an Intel-based, server-grade host running the XO/XOA VM for the duration of the EPYC VM-to-VM networking bug.
What do you think?
-
@john-c Minisforum NPB7 all set up with XCP-ng 8.2 LTS.
Ready to connect to the network tomorrow. Will have results before 10:00 GMT.
-
@john-c Let's see what the test will show. If this fixes it, I will remove one host (HPE ProLiant DL360 Gen10) from the 3-host DR pool and dedicate it to this.
-
@john-c I did a quick test: installed XO on our DR pool and ran a new full backup. The NICs are only 1 Gb/s, BUT the backup fully saturates the NIC, double the performance of XOA running on the EPYC hosts!!
Tomorrow, as said, I'll run it on the Minisforum with 2.5 Gb/s NICs. It should saturate them as well (it does @homelab).
-
@manilx said in Epyc VM to VM networking slow:
@john-c I did a quick test: installed XO on our DR pool and ran a new full backup. The NICs are only 1 Gb/s, BUT the backup fully saturates the NIC, double the performance of XOA running on the EPYC hosts!!
Tomorrow, as said, I'll run it on the Minisforum with 2.5 Gb/s NICs. It should saturate them as well (it does @homelab).
If the dedicated HPE ProLiant DL360 Gen10 were to receive a 10 Gb/s PCIe Ethernet NIC, that would increase throughput further, especially with a 4-port version of that 10 Gb/s NIC and the ports paired up into 2 LACP bonds.
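For reference, bonding two of those ports with LACP would look roughly like this with the xe CLI (a sketch only; the UUIDs and host name are placeholders, and the matching switch ports must be configured for LACP as well):

```
# List the PIFs for the 10 Gb/s NIC's ports on the host:
xe pif-list host-name-label=<host> params=uuid,device

# Create a network for the bond, then bond the first pair of ports with LACP:
xe network-create name-label="bond-10g-a"
xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid> mode=lacp

# Repeat with a second network and the remaining pair for the second bond.
```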
-
@john-c If the test goes well, this is what we'll do: buy a 10 Gb/s NIC (4 ports).