Epyc VM to VM networking slow
-
@LennertvdBerg they are still trying to figure this one out.
And an estimated full fix is not in sight just yet, from what I know. At least I haven't been informed about this in my ticket with them. But I do know they are still working very hard on it.
-
That's correct; it's a long investigation that is very likely related to the AMD microarchitecture itself. It's not a trivial thing to fix. We've seen various improvements here and there, but nothing big so far. We're still working on it, and as Vates grows, we'll have more resources to handle the issue.
-
Just out of curiosity, how is everyone that's experiencing this issue currently dealing with it while the issue is being investigated? I was sort of naively hoping that it would get sorted by the 8.3 release, but now that those hopes have been dashed I'm trying to see what options I have to work around the issue.
-
We spread network-heavy VMs across the cluster, since it is a per-physical-host limit. We also changed our design a bit: where we originally intended to have all levels of routers virtual, we split out the core routers and kept them physical.
-
@timewasted Thankfully our VMs are fine with a 1Gb connection.
The exception being XOA itself during backups: we're getting a max of 80-90MB/s. This is all on 10Gb connections.
At my homelab, with a measly Protectli VP6670 (2.5Gb management connection), I can fully saturate the network port at 200-300MB/s... I'm sure it's because of this EPYC issue that we don't get more speed at the production site.
-
@manilx is your nas virtualized on the host, or a separate physical box?
-
@sluflyer06 Slow business setup: our backup NASes, a Synology DS3622XS and a QNAP h1288X, are both connected via 10G to a 10G switch; both HP EPYC hosts are also connected to the same switch via 10G.
Fast homelab setup: the backup NAS is the same QNAP via 10G, plus 2 Protectli VP6670 hosts connected on their 2.5G management interfaces.
-
@manilx I don't think it is directly related, given just how low it is. But we also see similar lower-than-expected speeds on backups.
-
@Seneram I don't know what the issue is, but the only difference is the host/CPU...
On a beast of a host I get 1/3 the backup speed of a mini PC. This is with the XOA/XO VMs on the hosts themselves.
-
@manilx Oh, I absolutely agree that it is an issue... Since backups are handled by the XOA VM, whatever is causing our slowdowns for network traffic between VMs (and out of VMs to external hosts) might impact the XOA backup process too.
What do you think, @olivierlambert, are these perhaps directly related? It would also explain the very low backup speeds we see (we have fully loaded Synology FS2500s, all flash, with write-intensive SSDs).
-
It's really hard to answer. An easy way to test is to run XOA outside the master, so the traffic leaves the host via physical NICs. If you get the same speed, it's unrelated.
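One way to make that comparison concrete is with iperf3, measuring VM-to-VM traffic on the same host against traffic that crosses a physical NIC. This is just a sketch, assuming iperf3 is installed in the guests; the hostnames `vm-on-same-host` and `host-outside-pool` are placeholders.

```shell
# Sketch: compare throughput on the two paths the thread discusses.
# Exit cleanly on machines without iperf3 installed.
command -v iperf3 >/dev/null 2>&1 || { echo "iperf3 not installed; skipping"; exit 0; }

# On the receiving side, first run: iperf3 -s

# Test 1: client VM -> VM on the SAME host (traffic stays on the internal bridge).
iperf3 -c vm-on-same-host -t 10 -P 4 || echo "vm-on-same-host is a placeholder name"

# Test 2: client VM -> a host outside the pool or a physical box
# (traffic crosses the physical NIC).
iperf3 -c host-outside-pool -t 10 -P 4 || echo "host-outside-pool is a placeholder name"
```

If both paths show roughly the same throughput, the backup slowness is probably not this EPYC issue.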
-
@olivierlambert Define "outside the master"? Since backup traffic passes through the XOA VM, if it is hitting this issue with VM traffic then that doesn't help, does it? Or do you mean outside of the pool entirely, as in a physical machine?
-
If XO sits outside the master, the traffic has to go through a physical NIC (from the host to the other host where the XO VM resides) and not from dom0 to the VM directly.
-
@olivierlambert Sure, but the traffic still goes into a VM, through it, and then out. That is also affected by the issue: any traffic through a VM is affected by this bug, as we established earlier.
-
IIRC, as long as the traffic is going via a physical NIC, the impact is greatly reduced. That's why it's better to check with XO outside the master itself to get the traffic going outside the host. That's because of the NIC offload work.
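For reference, the offload settings behind this point can be inspected from dom0 with ethtool. A minimal sketch, assuming ethtool is available; the interface name `eth0` is a placeholder and should be replaced with the actual physical NIC.

```shell
# Sketch: show which hardware offloads (TSO/GSO/GRO, checksumming)
# are currently enabled on a physical NIC.
command -v ethtool >/dev/null 2>&1 || { echo "ethtool not installed; skipping"; exit 0; }

ethtool -k eth0 2>/dev/null | grep -E 'segmentation|generic-receive|checksum' \
  || echo "eth0 not present (placeholder interface name)"
```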
-
@olivierlambert said in Epyc VM to VM networking slow:
IIRC, as long as the traffic is going via a physical NIC, the impact is greatly reduced. That's why it's better to check with XO outside the master itself to get the traffic going outside the host. That's because of the NIC offload work.
@Seneram What olivierlambert is saying is to have the XO/XOA on another system which the pool connects to, but outside of the pool itself.
-
@john-c You mean on another host not belonging to the "EPYC" pool? Could try that; I have XO running on a Protectli, BUT they only have a 1Gb network...
-
@manilx said in Epyc VM to VM networking slow:
@john-c You mean on another host not belonging to the "EPYC" pool? Could try that; I have XO running on a Protectli, BUT they only have a 1Gb network...
Yes. If you can do it on a Protectli 4-port or 6-port model, they can potentially have 2.5Gb/s LAN, or even 10Gb/s LAN if using an SFP+ module.
-
@john-c Don't have one of those at the office, only 2 at the homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5Gb NICs.
Will install XCP-ng later today and try tomorrow...
-
@manilx said in Epyc VM to VM networking slow:
@john-c Don't have one of those at the office, only 2 at the homelab, and they're needed there.
BUT I have a Minisforum NPB7 with 2.5Gb NICs.
Will install XCP-ng later today and try tomorrow...
You can even do it on another actual server, as long as it is outside of the EPYC server pool(s), preferably with a non-affected CPU, as this will force the traffic through a physical interface.