XCP-ng

    Epyc VM to VM networking slow

• olivierlambert Vates 🪐 Co-Founder CEO @linuxmoose

  @linuxmoose said in Epyc VM to VM networking slow:

  I feel that this is going to be a much larger issue for us.

  Before that, I would strongly encourage you to test whether it's really a problem, because it's really not in 90% of the use cases we've seen.

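As a practical way to act on that advice, a same-host VM-to-VM iperf3 run is usually enough to see whether the EPYC penalty matters for a given workload. A minimal sketch, assuming iperf3 is installed in both guests, an iperf3 server (iperf3 -s) is already running in the target VM, and the IP address below is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: measure VM-to-VM throughput with iperf3 and report Gbit/s.

Run inside one VM while `iperf3 -s` runs in a second VM on the same host.
The target IP is a placeholder, not a value from this thread.
"""
import json
import subprocess

TARGET = "10.0.0.12"  # placeholder: IP of the VM running `iperf3 -s`

# 10-second run, 4 parallel streams, JSON output for easy parsing.
result = subprocess.run(
    ["iperf3", "-c", TARGET, "-t", "10", "-P", "4", "-J"],
    check=True, capture_output=True, text=True)

report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"VM-to-VM throughput: {bps / 1e9:.2f} Gbit/s")
```
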
• manilx @olivierlambert

  @olivierlambert The biggest issue is when it presents itself during backups, which are SLOW compared to Intel.

  We've been dealing with that since the beginning of our VMware-to-XCP-ng switch. Unfortunately, in hindsight, we chose AMD EPYC. Huge mistake!!

• olivierlambert Vates 🪐 Co-Founder CEO

  I wasn't addressing you, but @linuxmoose, who has theoretical concerns.

• manilx @olivierlambert

  @olivierlambert Just added my practical experience, which might be of interest. AND that was not a shot AGAINST xcpng. FAR FROM IT.

  Switch, do it, tomorrow! Just don't choose EPYC.

• olivierlambert Vates 🪐 Co-Founder CEO

  Yes, but I don't think there's a need to add another "layer on the cake" about this; you already know we are convinced about getting it fixed as a priority and have already invested nearly half a million to fix it.

  It's really rare for it to become a blocker, and the proof is that it took years for people to even notice (even us, with an EPYC production infrastructure).

• manilx @olivierlambert

  @olivierlambert OK, won't comment any more... You can delete my comment at your leisure.

• olivierlambert Vates 🪐 Co-Founder CEO

  That's not what I meant either. I was just answering initially about the theoretical concern, mostly following @planedrop's logical question; in other words, "before fearing something, test it for real and see the real impact against your requirements".

  We already know you are affected; it's as if you were afraid we wouldn't care about your use case, while we are already deeply invested in finding a solution to a tricky problem that could affect some people in some situations.

• manilx @olivierlambert

  @olivierlambert Nope, that was not on my mind! I KNOW you are taking care of this.

  I just wanted to add my practical experience for someone asking theoretically whether there is an issue. And for us there is one during backups, not during normal operations.

  Backups taking 2-3 times longer on EPYC than on Intel might be an issue for someone thinking of deploying on a "fairly large multi-datacenter environment".

  No harm meant and no finger was pointed. Just my honest 2 cents.

• planedrop Top contributor @olivierlambert

  @olivierlambert Yeah, and on this note I can say my entire lab is Threadripper, so it suffers from the same issue, and it hasn't created any real-world problems for me.

• linuxmoose @planedrop

  @planedrop It is a mix of anything and everything one would find in an enterprise datacenter. Lots of application server to database server connections, and we are also running Rancher with Longhorn storage, which is particularly sensitive to latency, but mostly of the storage type - not networking latency.

  We will just have to test and see if it is indeed an issue. If I understand correctly, the main issue is with performance between VMs on the same virtualization host. In that case, we can use rules to place application and db servers on separate hosts for better performance. Ironically, that is the opposite of the configuration we currently use with VMware.

  Anyway, we will just have to do some testing to see if it is an issue worth stressing over for us.
  Thanks.

• linuxmoose @olivierlambert

  @olivierlambert said in Epyc VM to VM networking slow:

  I feel that this is going to be a much larger issue for us.

  Before that, I would strongly encourage you to test whether it's really a problem, because it's really not in 90% of the use cases we've seen.

  Thanks @olivierlambert - that is definitely the plan. I still see XCP-ng as the best alternative we've considered so far.

• planedrop Top contributor @linuxmoose

  @linuxmoose Yeah, testing it is definitely the way to go here; I don't think you'll see very many issues, TBH.

  It's worth noting that the speeds being seen were still multi-gigabit, so again it's not like things are dead slow.

• olivierlambert Vates 🪐 Co-Founder CEO

  In the meantime, we'll keep you posted in here on our patches to test. So far, it's likely a patch in the guest (i.e. the Linux kernel) that might improve the performance nicely. And if you need very high NIC performance, you can use SR-IOV and enjoy native NIC speed, even if there are some extra constraints.

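For reference, SR-IOV on XCP-ng is configured from the host CLI by creating a network backed by the NIC's virtual functions and then attaching the VM to that network. A minimal sketch, assuming an SR-IOV-capable NIC and the xe network-sriov-create command shipped with recent XCP-ng releases; the PIF UUID is a placeholder, and the exact constraints should be checked against the official docs for your version:

```python
#!/usr/bin/env python3
"""Sketch: create an SR-IOV network on an XCP-ng host (run in Dom0).

Assumptions not taken from this thread: the NIC supports SR-IOV, the host
ships `xe network-sriov-create`, and PIF_UUID is a placeholder to be
replaced with a real UUID from `xe pif-list`.
"""
import subprocess

def xe(*args: str) -> str:
    """Run an xe command and return its trimmed output."""
    return subprocess.run(["xe", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

PIF_UUID = "<uuid-of-sriov-capable-pif>"  # placeholder

# 1. Create the network that will carry the SR-IOV virtual functions.
net_uuid = xe("network-create", "name-label=sriov-net")

# 2. Enable SR-IOV on the physical NIC and bind it to that network.
xe("network-sriov-create", f"network-uuid={net_uuid}", f"pif-uuid={PIF_UUID}")

# VMs attached to "sriov-net" (via Xen Orchestra or `xe vif-create`) will get
# a virtual function and bypass the Dom0 network datapath.
print(f"SR-IOV network created: {net_uuid}")
```
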
• planedrop Top contributor @olivierlambert

  @olivierlambert That's a good point about SR-IOV; it would be a good workaround if super fast NIC speeds are needed in a specific guest.

• Forza @planedrop

  Would SR-IOV with XOA help backup speeds?

• JamesG @Forza

  @Forza said in Epyc VM to VM networking slow:

  Would SR-IOV with XOA help backup speeds?

  If you specify the SR-IOV NIC, it will be wire-speed.

• planedrop Top contributor @Forza

  @Forza The XOA backup performance is more related to processing than to the network, at least as I understand it and have tested.

  So I don't think you'll see much of a change there.

• olivierlambert Vates 🪐 Co-Founder CEO

  On Intel, the biggest bottleneck ATM is the export speed capability of the Dom0. On AMD, the backup speed is also affected by the lack of an equivalent of Intel's iPAT, but it might also depend on other factors (backup repo speed, etc.).

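A rough way to separate the Dom0 export bottleneck from backup-repository speed is to time a plain export on the host itself. A minimal sketch with placeholder names; it assumes the VM is halted (xe vm-export won't export a running VM directly) and that the destination has room for a full XVA:

```python
#!/usr/bin/env python3
"""Sketch: rough Dom0 export throughput, independent of the backup target.

Run in Dom0 on the XCP-ng host. VM name and output path are placeholders;
the export writes a full XVA copy, which is deleted again afterwards.
"""
import os
import subprocess
import time

VM = "test-vm"                    # placeholder name-label or UUID (halted VM)
OUT = "/var/tmp/export-test.xva"  # placeholder path on fast local storage

start = time.time()
subprocess.run(["xe", "vm-export", f"vm={VM}", f"filename={OUT}"], check=True)
elapsed = time.time() - start

size_mb = os.path.getsize(OUT) / 1e6
print(f"Exported {size_mb:.0f} MB in {elapsed:.0f} s "
      f"({size_mb / elapsed:.0f} MB/s out of the Dom0)")

os.remove(OUT)  # clean up the test export
```
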
• planedrop Top contributor @olivierlambert

  @olivierlambert Yeah, so far backups have been fast enough not to pose any huge issue though.

  IMO if you have a huge VM (many TB), it should just be dealt with on a NAS or something instead of a VHD.

  Still glad that qcow2 is coming though!
