XCP-ng

    Epyc VM to VM networking slow

    Compute
    234 Posts 24 Posters 107.0k Views 27 Watching
    • TeddyAstieT Offline
      TeddyAstie Vates 🪐 XCP-ng Team Xen Guru
      last edited by TeddyAstie

      For those who have AMD EPYC 7003 CPUs (Zen 3 EPYCs): in the Processor settings of your firmware, you may find

      • Enhanced REP MOVSB/STOSB (ERMS)
      • Fast Short REP MOVSB (FSRM)

      These are apparently disabled by default.
      It could be interesting to enable them and see if it changes anything performance-wise. I am not sure whether they merely expose a CPUID flag or actually change the CPU's behavior, though.

      You can also try REP-MOV/STOS Streaming to see if it changes anything too.
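
      If you want to check whether a system (dom0 or a Linux guest) actually sees these features once enabled, they show up as the erms and fsrm flags in /proc/cpuinfo. A minimal sketch:

      ```python
      # Check whether this Linux system advertises the ERMS and FSRM CPU flags.
      flags = set()
      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.startswith("flags"):
                  flags = set(line.split(":", 1)[1].split())
                  break

      for flag in ("erms", "fsrm"):
          print(f"{flag}: {'present' if flag in flags else 'absent'}")
      ```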

      1 Reply Last reply Reply Quote 0
      • D Offline
        dknight-bg @TeddyAstie
        last edited by

        @TeddyAstie

        I'm attaching results from an EPYC 9004 system (AS-1015CS-TNR-EU, MB: H13SSW):

        user | cpu | family | market | v2v 1T | v2v 4T | h2v 1T | h2v 4T | notes
        dknight-bg | EPYC 9354P | Zen4 | server | 5.10 G (130/150/250) | 6.24 G (131/254/348) | 11.1 G (0/131/216) | 11.1 G (0/187/302) | Disabled: Enhanced REP MOVSB/STOSB (ERMS), Fast Short REP MOVSB (FSRM)
        dknight-bg | EPYC 9354P | Zen4 | server | 6.71 G (112/223/269) | 7.11 G (122/261/342) | 11.3 G (0/145/190) | 11.5 G (0/179/282) | Enabled: Enhanced REP MOVSB/STOSB (ERMS), Fast Short REP MOVSB (FSRM)

        I couldn't find a setting for REP-MOV/STOS Streaming in the BIOS, nor in the MB manual.
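
        For reference, this is roughly how such V2V throughput numbers can be reproduced. The thread doesn't name the exact tool, so this sketch assumes iperf3 (with iperf3 -s already running on the target VM) and maps 1T/4T to the -P parallel-streams option:

        ```python
        # Hedged sketch: measure VM-to-VM throughput with iperf3 (assumed tool).
        import subprocess

        TARGET_VM_IP = "192.0.2.10"  # placeholder: the receiving VM's address

        for streams in (1, 4):  # the 1T and 4T columns
            result = subprocess.run(
                ["iperf3", "-c", TARGET_VM_IP, "-P", str(streams), "-t", "30"],
                capture_output=True, text=True,
            )
            print(f"--- {streams} stream(s) ---")
            print("\n".join(result.stdout.splitlines()[-4:]))  # summary lines
        ```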

        1 Reply Last reply Reply Quote 0
        • olivierlambertO Offline
          olivierlambert Vates 🪐 Co-Founder CEO
          last edited by

          For VM-to-VM that's already a "free" +30% perf for 1T, impressive. And +14% for 4T, not bad.
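
          As a quick sanity check, recomputing those percentages from dknight-bg's table (Gbit/s, ERMS/FSRM disabled vs enabled):

          ```python
          # Speedups from the table above (disabled -> enabled):
          v2v_1t = (6.71 / 5.10 - 1) * 100  # ~= +31.6%, the "free +30%" for 1T
          v2v_4t = (7.11 / 6.24 - 1) * 100  # ~= +13.9%, the "+14%" for 4T
          print(f"v2v 1T: +{v2v_1t:.1f}%  v2v 4T: +{v2v_4t:.1f}%")
          ```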

          N 1 Reply Last reply Reply Quote 0
          • N Offline
            nicols @olivierlambert
            last edited by nicols

            @olivierlambert yes, but this is only a small improvement to the overall problem. Compared to a similar Intel platform, V2V network performance is still very low. Is there any news on a final solution to this problem?

            M 1 Reply Last reply Reply Quote 0
            • M Offline
              manilx @nicols
              last edited by

              @nicols We're on two HP servers with 32-core AMD EPYC 7543P CPUs and it is a pain.
              We had to deploy an Intel server just for running the backups, which improved things a lot (but that's absurd with two beasts of hosts).

              1 Reply Last reply Reply Quote 0
              • olivierlambertO Offline
                olivierlambert Vates 🪐 Co-Founder CEO
                last edited by

                No obvious solution yet; it's likely due to an architectural problem on AMD, because of the CCDs and how the CPUs are built. So the solution (if there is one) will likely be a sum of various small improvements that make it bearable.

                I'm going to Santa Clara to discuss that with AMD directly (among other things).
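
                To see the CCD/CCX layout in question on a given host: EPYC cores within one CCX share an L3 cache, so grouping CPUs by shared L3 makes the topology visible. A minimal Linux sysfs sketch (cache index3 is usually the L3; worth double-checking on your system):

                ```python
                # Group CPUs by shared L3 cache (roughly one group per CCX on EPYC).
                import glob

                l3_groups = set()
                for path in glob.glob(
                        "/sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list"):
                    with open(path) as f:
                        l3_groups.add(f.read().strip())

                for group in sorted(l3_groups):
                    print("CPUs sharing one L3:", group)
                ```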

                M ForzaF 2 Replies Last reply Reply Quote 2
                • M Offline
                  manilx @olivierlambert
                  last edited by

                  @olivierlambert No more EPYCs here, that's for sure. In the future it'll be Intel again (even if they then belong to Broadcom and TMC :p)

                  1 Reply Last reply Reply Quote 0
                  • ForzaF Offline
                    Forza @olivierlambert
                    last edited by

                    @olivierlambert said in Epyc VM to VM networking slow:

                    No obvious solution yet; it's likely due to an architectural problem on AMD, because of the CCDs and how the CPUs are built. So the solution (if there is one) will likely be a sum of various small improvements that make it bearable.

                    I'm going to Santa Clara to discuss that with AMD directly (among other things).

                    Do we have other data to back this up? The issue is not really common outside of Xen. I do hope some solution comes out of the meeting with AMD.

                    1 Reply Last reply Reply Quote 0
                    • olivierlambertO Offline
                      olivierlambert Vates 🪐 Co-Founder CEO
                      last edited by

                      If we become official partners, we'll be able to get more advanced access to their teams. I still have hope; it's just that the pace isn't up to me.

                      D ForzaF 2 Replies Last reply Reply Quote 0
                      • D Offline
                        Davidj 0 @olivierlambert
                        last edited by

                        @olivierlambert
                        Can we rule out extra_guest_irqs as the root cause of this problem?

                        https://docs.xcp-ng.org/compute/#nvme-storage-devices-on-linux
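
                        For reference, extra_guest_irqs is a Xen boot parameter, so a quick way to see whether a host has it set at all is to inspect the hypervisor command line. A minimal sketch (run in dom0; xl info and its xen_commandline field are standard Xen):

                        ```python
                        # Check whether extra_guest_irqs appears on the Xen command line.
                        import subprocess

                        info = subprocess.run(["xl", "info"],
                                              capture_output=True, text=True).stdout
                        for line in info.splitlines():
                            if line.startswith("xen_commandline"):
                                status = "set" if "extra_guest_irqs" in line else "not set"
                                print(f"extra_guest_irqs is {status}: {line.strip()}")
                        ```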

                        1 Reply Last reply Reply Quote 0
                        • olivierlambertO Offline
                          olivierlambert Vates 🪐 Co-Founder CEO
                          last edited by

                          It's probably completely unrelated, but feel free to test 🙂

                          1 Reply Last reply Reply Quote 0
                          • ForzaF Offline
                            Forza @olivierlambert
                            last edited by

                            @olivierlambert said in Epyc VM to VM networking slow:

                            If we become official partners, we'll be able to get more advanced access to their teams. I still have hope; it's just that the pace isn't up to me.

                            Hi, is there anything new to report on this? We have very powerful machines, but they are unfortunately limited by this stubborn issue.

                            M TeddyAstieT 2 Replies Last reply Reply Quote 0
                            • M Offline
                              manilx @Forza
                              last edited by

                              @Forza Ditto. A €15,000 EPYC HP monster is slower than a €1,600 Protectli Intel box...
                              This is a joke, and had we known this we'd NEVER have jumped on the AMD wagon 😞

                              1 Reply Last reply Reply Quote 0
                              • TeddyAstieT Offline
                                TeddyAstie Vates 🪐 XCP-ng Team Xen Guru @Forza
                                last edited by

                                @Forza said in Epyc VM to VM networking slow:

                                olivierlambert said in Epyc VM to VM networking slow:

                                If we become official partners, we'll be able to get more advanced access to their teams. I still have hope; it's just that the pace isn't up to me.

                                Hi, is there anything new to report on this? We have very powerful machines, but they are unfortunately limited by this stubborn issue.

                                Can you test https://xcp-ng.org/forum/topic/10862/early-testable-pvh-support ?

                                We observe very significant improvements on AMD EPYC with PVH.

                                We're still pinpointing the issue with HVM; the current hypothesis is an issue with memory typing (the grant table being accessed as uncacheable (UC), which is very slow) related to grant-table positioning in HVM.
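
                                If you want to confirm which mode a given VM actually runs in before/after such a test, recent XAPI versions expose a domain-type VM parameter. A hedged sketch (parameter name worth verifying on your XCP-ng version; VM_UUID is a placeholder):

                                ```python
                                # Query a VM's virtualization mode via XAPI (run in dom0).
                                import subprocess

                                VM_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder
                                out = subprocess.run(
                                    ["xe", "vm-param-get", f"uuid={VM_UUID}",
                                     "param-name=domain-type"],
                                    capture_output=True, text=True,
                                )
                                print(out.stdout.strip())  # e.g. 'hvm' or 'pvh'
                                ```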

                                ForzaF 1 Reply Last reply Reply Quote 0
                                • ForzaF Offline
                                  Forza @TeddyAstie
                                  last edited by

                                  @TeddyAstie Unfortunately not. This is a production pool on 8.2.1, so I do not want to try anything too experimental.

                                  Do we know if the issue happens on plain Xen with a modern (6.12-15) dom0 kernel?

                                  1 Reply Last reply Reply Quote 0
                                  • olivierlambertO Offline
                                    olivierlambert Vates 🪐 Co-Founder CEO
                                    last edited by olivierlambert

                                    It happens with any Xen and Linux version. Vates is now spearheading the search for the problem and a solution; there's no upstream fix anywhere.

                                    ForzaF 1 Reply Last reply Reply Quote 1
                                    • ForzaF Offline
                                      Forza @olivierlambert
                                      last edited by

                                      OK, thanks for the update. It would be interesting to hear what AMD says about this issue.

                                      1 Reply Last reply Reply Quote 0
                                      • olivierlambertO Offline
                                        olivierlambert Vates 🪐 Co-Founder CEO
                                        last edited by olivierlambert

                                        Our most promising lead is that it's due to AMD CPUs lacking a feature Intel has, called iPAT (the "ignore PAT" bit in EPT entries).

                                        In very short (and probably too short to be entirely correct): the grant tables in the guest (used to communicate securely between, in this case, the VM and the Dom0) are not cached by the AMD CPU. On AMD, there's no way to force a cache attribute on a guest memory access, unlike on Intel. So grant-table requests are uncached on AMD versus Intel, which explains at least part of the performance difference.
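
                                        As a deliberately simplified model of that difference (not how the hardware literally works; the real rules involve MTRRs, PAT and the EPT/NPT page-table formats):

                                        ```python
                                        # Toy model: effective memory type for a guest mapping.
                                        # Intel EPT entries carry an iPAT ("ignore PAT") bit that
                                        # lets the hypervisor force its own type (e.g. write-back)
                                        # regardless of the guest's PAT; AMD NPT has no such
                                        # override, so a guest-side UC mapping stays UC.
                                        def effective_memtype(vendor, guest_type,
                                                              host_type="WB", force_ipat=False):
                                            if vendor == "intel" and force_ipat:
                                                return host_type  # hypervisor's choice wins
                                            # Without an override, UC (the "strongest" type) wins.
                                            return "UC" if "UC" in (guest_type, host_type) else guest_type

                                        # The grant table ends up typed UC in the guest:
                                        print(effective_memtype("intel", "UC", force_ipat=True))  # WB: cached, fast
                                        print(effective_memtype("amd", "UC"))                     # UC: uncached, slow
                                        ```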

                                        What's next? Roger from the Xen project pointed us in that direction, and he wrote a very crude patch demonstrating it, which we tested internally; it shows this is a promising lead (5x perf VM->Dom0 and nearly 2x between VMs). Right now, we have multiple people working internally on a "real" patch, or at least on something to work around the issue if possible.

                                        It's been a few weeks since then, and we are trying to figure out (at Vates, again) what the best approach for AMD CPUs would be, to make a patch that could land upstream.

                                        linuxmooseL 1 Reply Last reply Reply Quote 4
                                        • linuxmooseL Offline
                                          linuxmoose @olivierlambert
                                          last edited by

                                          @olivierlambert I know that I am late to this thread, but I would like to ask if there is any realistic time estimate for a workable fix, or even for a temporary patch or workaround?
                                          We have been trialing XCP-ng with the compiled version of Xen Orchestra as a potential replacement for VMware in a fairly large multi-datacenter environment, before doing an "official" proof of concept. My concern is that we've done all of our testing on our freshly retired older Intel virtualization hosts, as we've just finished replacing everything with AMD EPYC-based servers. Until now, our only concern has been the 2 TB virtual disk limit; I feel this is going to be a much larger issue for us. It sounds as if I need to pull in a pool of EPYC systems to expand our testing.
                                          Thanks in advance for any input or guesstimates you may be able to provide.

                                          planedropP olivierlambertO 2 Replies Last reply Reply Quote 0
                                          • planedropP Offline
                                            planedrop Top contributor @linuxmoose
                                            last edited by

                                            @linuxmoose What kind of workloads do you need, network-wise? We're not talking about unusable performance; it's just not as good as Intel's.

                                            Unless you're doing higher-bandwidth stuff, I don't really foresee it posing much of an issue.

                                            linuxmooseL 1 Reply Last reply Reply Quote 0