Epyc VM to VM networking slow
-
@Seneram I don't know what the issue is, but the only difference is the host/CPU...
On a beast of a host I get 1/3 of the backup speed compared to a mini-PC. This is with XOA/XO VMs on the hosts themselves.
-
@manilx Oh, I absolutely agree that it is an issue... I could see that, since backups are handled by the XOA VM, whatever is causing our slowdowns for networking between VMs (and out of VMs to external) might impact the XOA backup process too.
What do you think, @olivierlambert, are these perhaps directly related? It would certainly explain the very low backup speeds we see as well (we have fully loaded all-flash Synology FS2500s with write-intensive SSDs).
-
It's really hard to answer. An easy way to test is to have XOA outside the master, so the traffic goes out of it via physical NICs. If you get the same speed, it's unrelated.
-
@olivierlambert Define "outside of master"? Since backup traffic passes through the XOA VM, if it is hitting this issue with VM traffic, then that doesn't help, does it? Or do you mean outside of the pool entirely, as in a physical machine?
-
If XO sits outside the master, the traffic will have to go through a physical NIC (from the host to the other host where the XO VM resides) and not from Dom0 to the VM directly.
-
@olivierlambert Sure, but the traffic still goes to a VM, through it, and then out; that is also affected by the issue. Any traffic through a VM is affected by this bug, as we established earlier.
-
IIRC, as long as the traffic is going via a physical NIC, the impact is greatly reduced. That's why it's better to check with XO outside the master itself to get the traffic going outside the host. That's because of the NIC offload work.
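The comparison suggested here is normally done with iperf3 along each path (VM to VM on the EPYC host vs. out through a physical NIC). As a rough illustration of the measurement technique only, here is a minimal, self-contained Python loopback probe; it is a sketch, not a replacement for iperf3, and every name in it is invented for the example:

```python
# Crude TCP throughput probe over loopback -- a stand-in for iperf3.
# All names here are invented for the example; a real test would run
# iperf3 between the actual endpoints instead.
import socket
import threading
import time

CHUNK = 65536  # 64 KiB per send/recv

def _drain(server_sock: socket.socket, total: int) -> None:
    """Accept one connection and read `total` bytes."""
    conn, _ = server_sock.accept()
    got = 0
    while got < total:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    conn.close()

def measure_mb_per_s(payload_mb: int = 32) -> float:
    """Send payload_mb MiB over a loopback TCP socket; return MB/s."""
    total = payload_mb * 1024 * 1024
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # loopback only; a real test targets a VM's IP
    srv.listen(1)
    reader = threading.Thread(target=_drain, args=(srv, total))
    reader.start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    chunk = b"\0" * CHUNK
    start = time.perf_counter()
    sent = 0
    while sent < total:
        cli.sendall(chunk)
        sent += len(chunk)
    cli.close()
    reader.join()  # include the receiver's drain time in the measurement
    elapsed = time.perf_counter() - start
    srv.close()
    return payload_mb / elapsed

if __name__ == "__main__":
    print(f"loopback TCP throughput: {measure_mb_per_s():.0f} MB/s")
```

Pointing the sender at a VM's IP (with the receiver inside that VM) instead of 127.0.0.1 gives a crude VM-to-VM number to compare against the physical-NIC path.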
-
@olivierlambert said in Epyc VM to VM networking slow:
IIRC, as long as the traffic is going via a physical NIC, the impact is greatly reduced. That's why it's better to check with XO outside the master itself to get the traffic going outside the host. That's because of the NIC offload work.
@Seneram What olivierlambert is saying is to have XO/XOA on another system which the pool connects to, but outside of the other pools.
-
@john-c You mean on another host not belonging to the "EPYC" pool? Could try that, I have XO running on a Protectli, BUT they only have 1 Gb networking...
-
@manilx said in Epyc VM to VM networking slow:
@john-c You mean on another host not belonging to the "EPYC" pool? Could try that, I have XO running on a Protectli, BUT they only have 1 Gb networking...
Yes. If you can do it on a Protectli with a 4-port or 6-port model, they can potentially have 2.5 Gb/s LAN or even 10 Gb/s if using an SFP+ module.
-
@john-c Don't have one of those at the office, only 2 at the homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5 Gb NICs.
Will install XCP-ng later today and try tomorrow...
-
@manilx said in Epyc VM to VM networking slow:
@john-c Don't have one of those at the office, only 2 at the homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5 Gb NICs.
Will install XCP-ng later today and try tomorrow...
You can even do it on another actual server, as long as it is outside of the EPYC server pool(s), preferably with a non-affected CPU, as this will force it to use a physical interface.
-
@john-c Could do it to our DR pools on older Intel hardware.
But as it is, I prefer to try the Minisforum, which is fast, and I can connect it to the same switches as the business EPYC pool.
Will report back tomorrow with results.
-
@manilx said in Epyc VM to VM networking slow:
@john-c Could do it to our DR pools on older Intel hardware.
But as it is, I prefer to try the Minisforum, which is fast, and I can connect it to the same switches as the business EPYC pool.
Will report back tomorrow with results.
Depending on the results, another server (or an additional one), such as the Intel ones from the DR pool or modern Intel-based server hardware, would be unaffected as the host. Then have the AMD EPYC pools connect to the Intel server hosting XO/XOA, with the affinity of the XO/XOA VM set to the Intel-based host for the duration of the EPYC bug.
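For the affinity step, XAPI exposes `affinity` as a VM parameter (a soft placement preference applied when the VM starts, not a hard pin), settable with `xe vm-param-set`. A hedged sketch of scripting that from Python; the UUIDs are placeholders, and `dry_run` keeps it from actually invoking `xe`:

```python
# Sketch: prefer a specific (e.g. Intel) host for the XO/XOA VM by setting
# the VM's "affinity" parameter via the xe CLI. Affinity is a soft placement
# preference used at VM start, not a hard pin.
# The UUIDs below are placeholders, not real values from the thread.
import subprocess

def pin_vm_to_host(vm_uuid: str, host_uuid: str, dry_run: bool = True) -> str:
    cmd = ["xe", "vm-param-set", f"uuid={vm_uuid}", f"affinity={host_uuid}"]
    if dry_run:
        # Only show the command; drop dry_run when running on an XCP-ng host.
        return " ".join(cmd)
    subprocess.run(cmd, check=True)
    return " ".join(cmd)

print(pin_vm_to_host("<xoa-vm-uuid>", "<intel-host-uuid>"))
```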
-
@john-c That would only work for someone who can have non-EPYC machines handle this, though. Unfortunately not an option for us.
-
@Seneram said in Epyc VM to VM networking slow:
@john-c That would only work for someone who can have non-EPYC machines handle this, though. Unfortunately not an option for us.
So it would need to be a separate host outside of the other EPYC pool(s); in other words, it can be another AMD or an Intel (Intel is best), as long as it's outside of all the other pools and is just hosting the XO/XOA VM.
Then have the other EPYC pool(s) connect to the XO/XOA VM on its separate hosting system. That way there will be less of an impact, as the other EPYC servers will have to use their physical NICs in order to connect to XO/XOA.
-
@john-c If this works, the 600€ Minisforum will have a new job! It's sitting idle in the cabinet now.
-
@manilx said in Epyc VM to VM networking slow:
@john-c If this works, the 600€ Minisforum will have a new job! It's sitting idle in the cabinet now.
Depending on the results of the test, I would recommend actual server hardware for hosting the XO/XOA VM, as server-grade hardware receives more QA than desktops, laptops, and mini computers. So you may need to get additional server hardware to use for the XO/XOA VM.
Actual server hardware also has out-of-band management controllers (BMCs). Non-server-grade hardware often doesn't have this functionality, so remotely managing and monitoring it is much harder or even impossible.
Finally, on top of this, server hardware is more likely to be on the XenServer HCL, and therefore to be eligible for paid support from Vates.
-
@olivierlambert The user manilx is going to try running XO/XOA on an Intel-based host, to see what the results are like with the EPYC pool(s) connecting to it.
They're going to run the test tomorrow and report back with results. I have posted my recommendation (opinion) above: maybe have an Intel-based, server-grade host running the XO/XOA VM for the duration of the EPYC VM-to-VM networking bug.
What do you think?
-
@john-c Minisforum NPB7 all set up with XCP-ng 8.2 LTS.
Ready to connect to the network tomorrow. Will have results before 10:00 GMT.