@abudef Yeah, nested virtualization has its own issues. I think it was possible with at least some versions of XenServer, but it's not well supported.
Changing those parameters also works on native XCP-ng, so I'm not sure what the advantage of putting XCP-ng on top of ESXi actually is. Maybe you can clarify that?
-
RE: XCP-ng host - Power management
@Forza As mentioned, for the OS and VMs to be able to leverage features such as turbo mode and C-states, the BIOS has to be set to allow OS control; without giving that control to the Linux OS, there are indeed various limitations. The uncore parameter must also be set to "dynamic" (OS DBPM), and if that cannot be done in the BIOS, it has to be set via the command:
xenpm set-scaling-governor performance
That puts it into effect immediately; for the setting to be preserved across reboots, the command:
/opt/xensource/libexec/xen-cmdline --set-xen cpufreq=xen:performance
has to be run as well. All this assumes there have not been significant changes since I last tried it out, which was over four years ago; but @abudef has older hardware, so I would expect this can still be taken care of in the BIOS. To quote from my first article on this topic:
"Red Hat states specifically in the article https://access.redhat.com/articles/2207751 that a server should be set for OS (operating system) performance as otherwise, the operating system (in this case, XenServer) cannot gain access to control the CPU power management, which ties in with the ability to manage also the CPU frequency settings."BTW, the article you reference is now just about ten years old and references Xen kernel 3.4. The latest Xen release is 4.18.
-
RE: XCP-ng host - Power management
@abudef Thanks for your feedback. Those should help, since those servers appear to be of a similar vintage. It looks like you have 2 CPUs per server, so the memory will be physically split between the two sockets and hence NUMA will play a role. Let me know if you have specific questions after you go through all that information.
As an aside, the activity of the VMs will have a big influence on power consumption, probably more than various BIOS settings. Note that if you want to make use of turbo mode and C-states, you'll need to set the BIOS to OS control. Here's a pretty good thread that discusses Dell BIOS power state settings that may be useful: https://serverfault.com/questions/74754/dell-poweredge-powersaving-bios-settings-differences
Power settings will only have a minimal effect on power consumption when the server is idle. I ran servers that had something like 80 virtual desktop VMs on them set to high performance, because during the day they needed all the power they could get. When the labs closed at night, the power consumption went way down. But it's always best to verify what works or not in your very own environment, as I state many times in my articles!
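If you want actual numbers rather than guesses, the BMC can usually report power draw directly; a quick sketch, assuming ipmitool is installed in dom0 and the iDRAC/BMC supports DCMI:
ipmitool dcmi power reading
prints the instantaneous and average wattage as seen by the platform, which is handy for comparing BIOS profiles under a real workload.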
Best regards,
Tobias
-
RE: XCP-ng host - Power management
@abudef Hello! It depends on many factors. Do you have any GPUs? What is the server model and its CPU configuration and specifics? What kind of VMs are you running?
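If you are not sure, a few standard commands run from dom0 should show most of that (just a sketch; lscpu, dmidecode and lspci are all normally present in XCP-ng dom0):
dmidecode -s system-product-name
reports the server model,
lscpu
shows sockets, cores, threads and NUMA nodes, and
lspci | grep -iE 'vga|3d'
lists any GPUs.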
Note that many CPU power settings can and should be made in the BIOS. Let us know what your server configuration looks like; this article may be of some use to you:
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-how-bios-settings-can-affect-your-apps-and-gpu-performance/
-
RE: Some questions about vCPUs and Topology
@jasonnix Yes, a vCPU means a virtual CPU, i.e. a CPU presented to the VM that gets scheduled onto a physical CPU core.
Servers have sockets that contain physical CPUs, so it sounds like your system has four sockets, holding four physical CPUs. Each physical CPU can have multiple cores and, in some cases, one thread per core or, in others, two threads per core, but let's stick to the simpler case here.
A configuration of 4 cores with 1 core per socket means each of the 4 vCPUs will reside on a core of a separate physical CPU socket, so all four physical CPUs are accessed. This is in most cases not ideal, as in many servers with 4 physical CPUs the memory banks are split between pairs of CPUs, two on one bank, two on the other. Having VMs cross over physical CPU memory bank boundaries is generally inefficient and should be avoided if possible. This is why NUMA (Non-Uniform Memory Access) and vNUMA become important in the configuration.
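As a concrete example (just a sketch; substitute your own VM UUID, and the VM should be shut down before changing these): on the host,
xl info -n
dumps the NUMA layout, i.e. which cores and how much memory belong to each node, and a 4-vCPU VM can be given a single-socket topology with
xe vm-param-set uuid=<vm-uuid> VCPUs-max=4 VCPUs-at-startup=4
xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=4
so that the guest sees one socket with four cores instead of four single-core sockets.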
And @gskger is correct that licensing can sometimes depend on the configuration.
I should add that under some circumstances, configuring the CPUs for turbo mode can be an advantage.
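(If the BIOS has handed power control to the OS, this can be toggled from dom0 as well; if I remember correctly, xenpm enable-turbo-mode and xenpm disable-turbo-mode do exactly that.)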
Suggested reading: my three-part set of articles on the effects of vCPU and GPU allocations. In particular, Part 3 addresses NUMA and configuration issues, and Part 2 discusses turbo mode.
I hope this helps, as this is all initially quite confusing.
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-how-bios-settings-can-affect-your-apps-and-gpu-performance/
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-part-2-how-not-only-bios-settings-but-also-gpu-settings-can-affect-your-apps-and-gpu-performance/
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-part-3-the-influence-of-numa-cpus-and-sockets-cores-persocket-plus-other-vm-settings-on-apps-and-gpu-performance/
-
RE: Missing backups
@McHenry Oh, I didn't see that you are doing continuous replication.
This tab is only for restoring a VM from a remote.
-
RE: Missing backups
@McHenry Hi
You can click on the "Refresh backup list" button to see your backups.