Epyc VM to VM networking slow
-
OK, thanks for the update. It would be interesting to hear what AMD says about this issue.
-
Our most promising lead is that it's due to AMD lacking a feature Intel has, called iPAT.
In short (and probably too short to be entirely correct): the grant tables in the guest (used to communicate securely between, in this case, the VM and the Dom0) are not cached on AMD CPUs. Unlike on Intel, there's no way on AMD to force a cache attribute on a guest memory access. So grant table accesses are uncached on AMD but cached on Intel, which explains at least part of the performance difference.
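To make that a bit more concrete, here is a simplified sketch (illustrative only, this is not the patch, and exact kernel signatures vary between versions) of how a Linux guest shares a page with the Dom0 through the grant table:

```c
/* Illustrative sketch only: the guest side of grant tables. A PV frontend
 * grants Dom0 access to one of its pages; the backend in Dom0 then maps
 * that page through the grant mechanism. Those grant-mediated accesses
 * are the ones affected by the missing cache attribute control on AMD. */
#include <linux/gfp.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static int share_page_with_dom0(struct page *page)
{
	/* Grant domain 0 read/write access to this frame. Returns a grant
	 * reference (>= 0) to hand over to the backend, or a negative errno. */
	return gnttab_grant_foreign_access(0 /* Dom0 */,
					   xen_page_to_gfn(page),
					   0 /* 0 = read/write */);
}
```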
What's next? Roger from the Xen project pointed us in that direction, and he wrote a very crude proof-of-concept patch that we tested internally; the results show it's a promising lead (5x performance VM->Dom0 and nearly 2x between VMs). Right now, we have multiple people working internally on a "real" patch, or at least on something to work around the issue if possible.
It's been a few weeks since then, and we (at Vates, again) are trying to figure out the best approach for AMD CPUs, so we can produce a patch that could land upstream.
-
@olivierlambert I know that I am late to this thread, but I would like to ask if there is any realistic time estimate for a workable fix, or even for a temporary patch or workaround?
We have been trialing XCP-ng with the compiled version of Xen Orchestra as a potential replacement for VMware in a fairly large multi-datacenter environment, before doing an "official" proof of concept. My concern is that we've done all of our testing on our freshly retired older Intel virtualization hosts - as we've just completed replacing everything with AMD EPYC-based servers. Until now, our only matter of concern has been the 2TB virtual disk limit. I feel that this is going to be a much larger issue for us. It sounds as if I need to pull in a pool of EPYC systems to expand our testing.
Thanks in advance for any input or guesstimates that you may be able to provide.
-
@linuxmoose What kind of workloads do you have network-wise, though? We aren't talking about unusable performance; it's just not as good as Intel.
Unless you're doing higher bandwidth stuff I don't really foresee it posing much of an issue.
-
@linuxmoose said in Epyc VM to VM networking slow:
I feel that this is going to be a much larger issue for us.
Before that, I would strongly encourage you to test whether it's really a problem, because it really isn't in 90% of the use cases we've seen.
-
@olivierlambert The biggest issue is when it shows up during backups, which are SLOW compared to Intel.
We've been dealing with that since the beginning of our VMware-to-XCP-ng switch. Unfortunately, in hindsight, we chose AMD EPYC. Huge mistake!!
-
I wasn't addressing you, but @linuxmoose, who had theoretical concerns.
-
@olivierlambert Just added my practical experience, which might be of interest. AND that was not a shot AGAINST XCP-ng. FAR FROM IT.
Switch, do it, tomorrow! Just don't choose EPYC.
-
Yes, but I don't think there's a need to add another "layer on the cake" here: you already know we are convinced about getting this fixed as a priority, and we're already investing nearly half a million to fix it.
It's really rare for this to become a blocker, and the proof is that it took years for people to even notice (including us, with an EPYC production infrastructure).
-
@olivierlambert OK, won't comment any more... You can delete my comment at your leisure.
-
That's not what I meant either. I was just answering the initial theoretical concern, mostly following @planedrop's logical question; in other words, "before fearing something, test it for real and see the actual impact against your requirements".
We already know you are affected. It's as if you were afraid we wouldn't care about your use case, while we are already deeply invested in finding a solution to a tricky problem that could affect some people in some situations.
-
@olivierlambert Nope, that was not on my mind! I KNOW you are taking care of this.
I just wanted to add my practical experience to someone asking theoretically if there is an issue. And for us there is one during backups, not during normal operations.
Backups taking 2-3 times longer on EPYC than on Intel might be an issue for someone thinking of deploying in a "fairly large multi-datacenter environment".
No harm meant and no fingers pointed. Just my honest 2 cents.
-
@olivierlambert Yeah, and on this note I can say my entire lab is Threadripper, so it suffers from the same issue, and it hasn't created any real-world problems for me.
-
@planedrop It is a mix of anything and everything one would find in an enterprise datacenter. Lots of application-server-to-database-server connections, and we are also running Rancher with Longhorn storage, which is particularly sensitive to latency - though mostly storage latency, not network latency. We will just have to test and see if it is indeed an issue. If I understand correctly, the main issue is with performance between VMs on the same virtualization host. In that case, we can use rules to place application and db servers on separate hosts for better performance. Ironically, that is the opposite of the configuration we currently use with VMware.
Anyway, we will just have to do some testing to see if it is an issue worth stressing over for us.
Thanks.
-
@olivierlambert said in Epyc VM to VM networking slow:
I feel that this is going to be a much larger issue for us.
Before that, I would strongly encourage you to test whether it's really a problem, because it really isn't in 90% of the use cases we've seen.
Thanks @olivierlambert - that is definitely the plan. I still see XCP-ng as the best alternative we've considered so far.
-
@linuxmoose Yeah, testing it is definitely the way to go here; I don't think you'll see very many issues TBH.
It's worth noting that the speeds being seen were still multi-gigabit, so again, it's not like things are dead slow.
-
In the meantime, we'll keep you posted here on our patches to test. So far, the likely outcome is a patch in the guest (i.e. the Linux kernel) that might improve performance nicely. And if you need very high NIC performance, you can use SR-IOV and enjoy native NIC speed, even if it comes with some extra constraints.
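For reference, enabling SR-IOV on XCP-ng looks roughly like this (see the official docs for the exact steps; the UUIDs below are placeholders, and your NIC and its driver must support SR-IOV):

```
# Create a network, then enable SR-IOV on the physical PIF backing it
xe network-create name-label=sriov-net
xe network-sriov-create network-uuid=<network-uuid> pif-uuid=<pif-uuid>
# Then add a VIF on that network to the VM; the guest needs the VF driver.
```

The "extra constraints" are things like VF traffic bypassing the virtual switch, and live migration typically not working while a VF is attached.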
-
@olivierlambert That's a good point about SR-IOV; it would be a good workaround if super-fast NIC speeds are needed in a specific guest.
-
Would SR-IOV with XOA help backup speeds?
-
@Forza said in Epyc VM to VM networking slow:
Would SR-IOV with XOA help backup speeds?
If you specify the SR-IOV NIC, it will be wire-speed.