XCP-ng

Socket topology in a pool

mauzilla

We're finalizing our pool. We have two hosts with 4 CPUs (sockets) each and a third host with 2 CPUs (same CPU range, just fewer sockets).

This obviously means that if I have VMs with a topology of, say, 4 sockets x 2 cores, I will not be able to move those VMs from a 4-socket host to a 2-socket host.

How does XCP-ng distribute the load? If we change our topology so that all VMs use 2 sockets, will only 2 sockets be used (so sockets 3 and 4 on a 4-socket host would never have any VMs on them), or will XCP-ng still distribute the load across all CPUs and use the least busy ones when booting a VM?

Forza @mauzilla

@mauzilla I do not think NUMA is exposed to the guests, so they will only see the number of cores assigned, i.e. you can migrate them just fine.
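
If you want to confirm what a guest actually sees, here's a quick sketch (assuming a Linux guest and xe access in dom0; <vm-uuid> is a placeholder for your VM's UUID):

    # On the host: the virtual topology is just a VM parameter
    xe vm-param-get uuid=<vm-uuid> param-name=platform param-key=cores-per-socket

    # Inside the guest: the OS reports the virtual topology, not the host's
    lscpu | grep -E 'Socket|Core|^CPU\(s\)'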

kagbasi-ngc @mauzilla

@mauzilla I believe I may have heard this covered in one of Lawrence Systems' explainer videos, though I cannot recall precisely.

Here's the video, might prove useful:

https://youtu.be/Lsi2-hAoKSE?si=nplW-MyZl-4tWT8M

Greg_E

Why not just set the VM to 1 processor with X cores?
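
For anyone wondering how that looks from the CLI, a sketch (the VM should be halted to change VCPUs-max; <vm-uuid> is a placeholder):

    # 8 vCPUs presented to the guest as 1 socket x 8 cores
    xe vm-param-set uuid=<vm-uuid> VCPUs-max=8
    xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=8
    xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=8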

mauzilla

I think I did not ask the question correctly. To simplify: if I have 4 sockets but set my VMs up to use 2 sockets x X cores, does this mean they will utilize sockets 1 and 2 and never 3 and 4? If so, how would my config need to change to use whatever sockets are available but X cores?

I'm trying to assess whether, in certain situations, I am not distributing the load of my VMs correctly.

mauzilla

So to further simplify (sorry, the previous reply was from my phone). I need to state upfront that although I know we specify the topology / RAM / disk etc. for each VM, when it comes to the RAM and CPU side I am a complete novice in how the underlying technologies distribute those resources to the VMs. I know we can set it, but how it works is a new landscape.

What I am trying to assess is whether there is a possibility of a bad design / rollout. Say my hypervisor has 4 physical CPUs (or sockets, I presume, within XOA) with 10 cores per socket, so, oversimplifying, I have in theory 40 cores available for my VMs.

Say I set up 10 VMs, each with 2 cores; in theory I am consuming at most 20 cores. What I am trying to assess is: if I set my VMs up to use only 1 socket (so my XOA setup is 1 socket x 2 cores), does this setting refer to the actual underlying physical CPU 1, or is it a virtualized topology (so the VM is merely under the impression it has 1 socket)?

If it is the underlying socket / physical CPU, would that imply that physical CPUs 2, 3 and 4 are never utilized because all my VMs are set up as 1 socket x 2 cores? If, however, my understanding is incorrect and the sockets-and-cores setting simply gives the VM a topology it thinks it has, what benefit is there for the VM in having different sockets / CPUs if this is just a virtualized setting?

Greg_E @mauzilla

I tend to set almost all of my VMs to 1 socket with 8 cores. My production system only has one socket per host, so no big deal there. But my lab has 2 sockets per host. I have seen most of the cores on a host active with multiple VMs configured this way, so the hypervisor must be balancing this in one way or another.
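
You can actually watch this from dom0; a sketch of the checks (xl ships with XCP-ng):

    # One row per vCPU; the CPU column is the physical CPU it is on
    # right now, and with no pinning it drifts as Xen rebalances
    xl vcpu-list

    # Host totals for comparison: nr_cpus, cores_per_socket, etc.
    xl info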

One of these days I'll have to try setting a VM for 8 sockets with 1 core each and see what happens.

mauzilla @Greg_E

@Greg_E we're running into an issue at the moment where, on one of our hypervisors with 4 sockets, the CPU is approximately 30-50% utilized, but the VMs are battling with CPU usage (or rather contention). Most VMs run fine, but when CPU load comes into play, it's quite obvious that some of the VMs are dramatically slower.

I have a mishmash of configurations (mostly 1-2 sockets x X cores), so I am trying to assess whether there is a configuration issue (i.e. whether it would be better to specify 4 sockets x X cores rather than 1-2 sockets on a 4-socket system).
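
Two standard checks for this kind of contention, as a sketch (xentop runs in dom0; steal time is read inside a Linux guest):

    # In dom0: per-VM CPU % as accounted by Xen itself
    xentop

    # Inside a guest: the 'st' column is steal time, i.e. the guest
    # was runnable but Xen was running something else
    vmstat 5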

Greg_E @mauzilla

I just read this thread and it might help:

https://xcp-ng.org/forum/topic/9924/vm-vcpu-allocation

I've only ever run my systems with the same processor and configuration in each host, so I'm not sure about your setup. The biggest issue with a mixed environment is keeping the processors in the same family/generation; mixing Intel and AMD in different hosts could cause issues after a migration.
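
If you want to compare how close the processors in your pool actually are, a sketch (<host-uuid> is a placeholder; check each host in turn):

    # List hosts, then dump each host's CPU vendor/model/flags
    xe host-list params=uuid,name-label
    xe host-param-get uuid=<host-uuid> param-name=cpu_info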
