    XCP-ng host - Power management

• abudef

Hi,
Are there any configurable options for a basic power management policy similar to ESXi's?

[attached screenshots: ESXi power management policy settings]

• olivierlambert (Vates 🪐 Co-Founder & CEO)

Question for @tjkreidl 😉

• tjkreidl (Ambassador) @abudef

@abudef Hello! It depends on many factors. Do you have any GPUs? What is the server model, and what are its CPU configuration and specifics? What kind of VMs are you running?
Note that many CPU power settings can and should be configured in the BIOS. This article may be of some use to you; have a look and let us know what your server configuration looks like:
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-how-bios-settings-can-affect-your-apps-and-gpu-performance/

• abudef @tjkreidl

@tjkreidl Hello, I have two identical Dell R630 servers (BIOS 2.19.0), each with a pair of Intel E5-2680 v4 CPUs, in my lab where I am testing deployment scenarios. In the lab, performance is not important; instead, I want to minimize power consumption. Ideally, I would like to be able to switch performance "profiles" without having to reboot the servers and toggle settings in the BIOS. Thanks for the link to the article; there are useful tips and advice in there!
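
For reference, on 13th-generation Dell machines like the R630, the system power profile can also be switched remotely through the iDRAC with racadm rather than by rebooting into BIOS setup, although a reboot is still required for the new profile to take effect. A minimal sketch, assuming iDRAC8 and the stock BIOS attribute names (worth verifying with racadm get first):

    # Show the current system profile (e.g. PerfOptimized, PerfPerWattOptimizedOs)
    racadm get BIOS.SysProfileSettings.SysProfile

    # Stage the OS-controlled performance-per-watt profile
    racadm set BIOS.SysProfileSettings.SysProfile PerfPerWattOptimizedOs

    # Create the BIOS job and power-cycle now so the staged change is applied
    racadm jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW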

• tjkreidl (Ambassador) @abudef

@abudef Thanks for your feedback. Those articles should help, since your servers appear to be of a similar vintage. With 2 CPUs per server, the memory will be physically split between the two sockets, and hence NUMA will play a role. Let me know if you have specific questions after you go through all that information.
As an aside, the activity of the VMs will have a big influence on power consumption, probably more than the various BIOS settings. Note that if you want to make use of turbo mode and C-states, you'll need to set the BIOS to OS control. Here's a pretty good thread discussing Dell BIOS power state settings that may be useful: https://serverfault.com/questions/74754/dell-poweredge-powersaving-bios-settings-differences
Power settings will only have a minimal effect on power consumption when the server is idle. I ran servers with something like 80 virtual desktop VMs on them set to high performance, because during the day they needed all the power they could get; when the labs closed at night, the power consumption went way down. But it's always best to verify what does or doesn't work in your very own environment, as I state many times in my articles!
Best regards,
Tobias
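
Before touching the BIOS, it can help to see what the hypervisor side currently exposes. On an XCP-ng host the Xen toolstack ships xenpm and xl, and a read-only check from the host console might look like this (standard subcommands, though the C-/P-state listings stay mostly empty until the BIOS hands power control to the OS):

    # Socket/core/NUMA layout as Xen sees it
    xenpm get-cpu-topology

    # Per-CPU idle (C-state) and frequency (P-state) information
    xenpm get-cpuidle-states
    xenpm get-cpufreq-para

    # NUMA node count and per-node free memory
    xl info -n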

• Forza

Enabling the various C-states and frequency scaling in the BIOS/firmware can help reduce power consumption. There is some latency cost, as it takes longer to wake from the deeper states.

Linux also has CPU frequency governors, but I am not sure how the Xen kernel handles this. Remember that dom0 is a VM under Xen, so things aren't as simple as on plain bare-metal Linux.

There's some information about this here:
https://wiki.xenproject.org/wiki/Xen_power_management
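
Worth noting: under Xen the frequency governor lives in the hypervisor rather than in the dom0 kernel, so the usual Linux cpufreq sysfs knobs don't apply; xenpm is the tool that drives it. A small sketch using standard xenpm subcommands (they only take effect once the BIOS has ceded power control to the OS):

    # Show the cpufreq driver and current governor per CPU
    xenpm get-cpufreq-para

    # Switch all CPUs to a power-saving governor (performance, powersave,
    # ondemand and userspace are the usual choices)
    xenpm set-scaling-governor powersave

    # Optionally cap the deepest C-state if wake-up latency becomes an issue
    xenpm set-max-cstate 1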

• tjkreidl (Ambassador) @Forza

@Forza As mentioned, for VMs and the OS to be able to leverage features such as turbo mode and C-states, the BIOS has to be set to OS control. Without giving such control to the Linux OS, there are indeed various limitations. The uncore parameter must also be set to "dynamic" (OS DBPM); if that is not done in the BIOS, the governor has to be set via the command:
xenpm set-scaling-governor performance
to take effect immediately, and for it to be preserved over reboots, the command:
/opt/xensource/libexec/xen-cmdline --set-xen cpufreq=xen:performance
has to be run as well. This all assumes there have not been significant changes since I last tried all this out, of course, which was over four years ago; but abudef has older hardware, and I would think that would allow for this to be taken care of in the BIOS. To quote from my first article on this topic:
"Red Hat states specifically in the article https://access.redhat.com/articles/2207751 that a server should be set for OS (operating system) performance as otherwise, the operating system (in this case, XenServer) cannot gain access to control the CPU power management, which ties in with the ability to manage also the CPU frequency settings."

BTW, the wiki page you reference is now just about ten years old and refers to Xen 3.4. The latest Xen release is 4.18.
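
Putting those two commands together, the pattern on an XCP-ng/XenServer host is: apply the governor live with xenpm, then bake it into the Xen boot line so it survives reboots. A sketch, assuming the stock xen-cmdline helper that ships under /opt/xensource/libexec:

    # Apply immediately (runtime only, lost on reboot)
    xenpm set-scaling-governor performance

    # Persist by appending cpufreq=xen:performance to the Xen boot parameters
    /opt/xensource/libexec/xen-cmdline --set-xen cpufreq=xen:performance

    # Check what will be passed to Xen on the next boot
    /opt/xensource/libexec/xen-cmdline --get-xen cpufreq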

• tjkreidl (Ambassador)

@abudef -- any progress on this in the meantime?

• abudef @tjkreidl

@tjkreidl In the end, everything turned out completely differently. To make the lab as efficient as possible, we decided to virtualize it, that is, to run the XCP-ng hosts as guests on a dedicated hypervisor. This allows us to efficiently turn the virtualized hosts on and off, change their CPU counts, and so on.

The only problem is that the current version of XCP-ng, or rather the version of Xen it uses, does not yet support nested virtualization, so the entire XCP-ng virtual lab "sits" on an ESXi hypervisor.
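
For anyone reproducing this setup: exposing hardware virtualization to a guest is a per-VM setting on ESXi, the "Expose hardware assisted virtualization to the guest OS" checkbox under the VM's CPU settings. In the VM's .vmx file it corresponds to a flag like the following (a sketch; verify against your ESXi version):

    # .vmx entry that passes VT-x/AMD-V through to the nested XCP-ng host
    vhv.enable = "TRUE"

If VMs running inside the nested XCP-ng hosts need network access, the port group on the outer vSwitch typically also needs promiscuous mode and forged transmits allowed.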

• tjkreidl (Ambassador) @abudef

@abudef Yeah, nested virtualization has its own issues. I think it was possible with at least some versions of XenServer, but it's not well supported.
Changing those power parameters also works on native XCP-ng, so I'm not sure where the advantage actually lies in putting XCP-ng on top of ESXi. Maybe you can clarify that?

• abudef @tjkreidl

@tjkreidl We don't need performance, but we do need to test how XCP-ng pools, networking, migration, live migration, backup, import from VMware, and so on all work. It's just a playground where we can run relatively many XCP-ng hosts while we learn, validate how things work, and prepare the process for the final migration from VMware to XCP-ng; what matters is efficiency and low requirements, not performance. We originally had two R630s ready for this, then four, but given the power consumption, keeping physical hypervisors would have been unnecessary, so in the end we decided to virtualize it all. As for ESXi, that's because XCP-ng works seamlessly there under nested virtualization.
