Some questions about vCPUs and Topology
-
Hello,
vCPU means virtual CPU. Does 1 vCPU mean one physical CPU?
What does 4 sockets with 1 core per socket mean?
What does it mean if I choose 4 vCPUs with a topology of 4 sockets with 1 core per socket? Thank you.
-
@jasonnix Not an expert on this topic, but my understanding is that 1 vCPU can be understood as 1 CPU core or, when available, 1 CPU thread. Since vCPUs are used for resource allocation by the hypervisor, vCPU overprovisioning can make things more complicated.
VMs or memory-sensitive applications sometimes make memory locality (NUMA) decisions based on the topology of the vCPUs (sockets/cores) presented by the hypervisor, which is why you can choose different topologies. You can read more on NUMA affinity in the XCP-ng documentation. In a homelab, you rarely have to worry about this, and as a rule of thumb you can use virtual sockets with 1 core each for the number of vCPUs you need (which is the standard for XCP-ng and VMware ESXi).
Sometimes it still makes sense to keep the number of sockets low and the number of cores high, as sockets or cores can be a licensing metric that determines license costs. However, most vendors already take this into account in their terms and conditions.
In a somewhat (over-) simplified form: socket/core topologies can be used in some special scenarios to optimize memory efficiency and performance or to optimize licensing costs.
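To make that concrete, here is a small, purely illustrative Python sketch (my own example, not anything XCP-ng ships) that lists the socket/core combinations which multiply out to a given vCPU count, flagging the two cases discussed above:

```python
def candidate_topologies(vcpus: int):
    """Return every (sockets, cores_per_socket) pair whose product equals vcpus."""
    return [(s, vcpus // s) for s in range(1, vcpus + 1) if vcpus % s == 0]

for sockets, cores in candidate_topologies(4):
    note = ""
    if cores == 1:
        note = "  <- 1 core per socket (the usual XCP-ng/ESXi default)"
    elif sockets == 1:
        note = "  <- fewest sockets (can matter for per-socket licensing)"
    print(f"{sockets} socket(s) x {cores} core(s)/socket = 4 vCPUs{note}")
```

For 4 vCPUs this prints the three possible layouts (1x4, 2x2, 4x1), which is really all the topology choice boils down to.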
-
@jasonnix Yes, a vCPU means a virtual CPU: the hypervisor's assignment of a physical CPU core (or a share of one) to a VM.
Servers have sockets that contain physical CPUs, so it sounds like your system has four sockets, holding four physical CPUs.
Each physical CPU can have multiple cores and, in some cases, one thread per core or, in others, two threads per core, but let's stick to the simpler case here.
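For reference, the arithmetic is simply sockets x cores per socket x threads per core; a toy Python illustration (just my own example, nothing hypervisor-specific):

```python
def logical_cpus(sockets: int, cores_per_socket: int, threads_per_core: int = 1) -> int:
    """Total logical processors = sockets x cores per socket x threads per core."""
    return sockets * cores_per_socket * threads_per_core

print(logical_cpus(4, 1))      # 4 sockets, 1 core each, no SMT   -> 4
print(logical_cpus(2, 8, 2))   # 2 sockets, 8 cores, 2 threads    -> 32
```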
A configuration of 4 vCPUs with 1 core per socket means each of the 4 vCPUs will reside on a core on four separate physical CPU sockets, so all four physical CPUs are accessed. In most cases this is not ideal, as in many servers with 4 physical CPUs the memory banks are split between pairs of CPUs, two on one bank and two on the other. Having VMs cross physical CPU memory bank boundaries is generally inefficient and should be avoided if possible. This is why NUMA (Non-Uniform Memory Access) and vNUMA become important in the configuration.
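If you want to check how the CPUs a Linux guest (or host) sees are spread across NUMA nodes, here is a minimal sketch that reads the standard Linux sysfs files (it assumes a Linux system with sysfs mounted; it is only an illustration, not XCP-ng tooling):

```python
import glob
import pathlib

def numa_layout():
    """Map each NUMA node to the CPUs it owns, as reported by Linux sysfs."""
    layout = {}
    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        path = pathlib.Path(node_dir)
        layout[path.name] = (path / "cpulist").read_text().strip()
    return layout

for node, cpus in numa_layout().items():
    print(f"{node}: CPUs {cpus}")
```

A single node in the output means all the CPUs share one memory bank; multiple nodes mean memory locality is something to think about.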
And @gskger is correct that licensing can sometimes depend on the configuration.
I should add that under some circumstances, configuring the CPUs for turbo mode can be an advantage.
Suggested reading: my three-part series of articles on the effects of vCPU and GPU allocations. In particular, Part 3 addresses NUMA and configuration issues, and Part 2 discusses turbo mode.
I hope this helps, as this topic is initially quite confusing.
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-how-bios-settings-can-affect-your-apps-and-gpu-performance/
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-part-2-how-not-only-bios-settings-but-also-gpu-settings-can-affect-your-apps-and-gpu-performance/
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-part-3-the-influence-of-numa-cpus-and-sockets-cores-persocket-plus-other-vm-settings-on-apps-and-gpu-performance/