@Tristis-Oris First verify you have good backups before considering deleting snapshots. You could also just export the snapshots associated with the VMs.
As to the GUI vs. the CLI, both should do the same thing; either way, if the job runs, it should show up in the task list.
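If you go the export route, a snapshot can be saved off from the CLI; a sketch, substituting your snapshot's UUID and a target filename:
xe snapshot-export-to-template snapshot-uuid=<snapshot-uuid> filename=snapshot-backup.xva
The resulting .xva file can be re-imported later with "xe vm-import" if you ever need it.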

RE: SR Garbage Collection running permanently
@Tristis-Oris By manually, do you mean from the CLI rather than from the GUI? If so, then:
xe sr-scan sr-uuid=sr_uuid
Check your logs (probably /var/log/SMlog) and run "xe task-list" to see what, if anything, is active.
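For example, to follow the storage manager log and list any active tasks with their progress:
tail -f /var/log/SMlog
xe task-list params=uuid,name-label,status,progress
-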
RE: SR Garbage Collection running permanently
@Tristis-Oris It wouldn't hurt to do a manual cleanup. Not sure if a reboot might help, but strange that no task is showing as active. Do you have other SRs on which you can try a scan/coalesce?
Are there any VMs in a weird power state? -
RE: SR Garbage Collection running permanently
@Tristis-Oris Note that if the SR storage device is around 90% or more full, a coalesce may not work. You have to either delete or move enough data so that there is adequate free space.
How full is the SR? That said, a coalesce process can take up to 24 hours to complete. I wonder whether it shows up, and with what progress, when you run "xe task-list"?
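To see how full it is, the SR's utilization can be checked from the CLI:
xe sr-list params=name-label,physical-utilisation,physical-size
Dividing physical-utilisation by physical-size gives the fill level; as noted, above roughly 90% a coalesce may fail for lack of working space.
-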
RE: XCP-ng host - Power management
@abudef Yeah, nested virtualization has its own issues. I think it was possible with at least some versions of XenServer, but it's not well supported.
Changing those parameters also works on native XCP-ng, so I'm not sure what advantage there actually is to running XCP-ng on top of ESXi. Maybe you can clarify that? -
RE: XCP-ng host - Power management
@Forza As mentioned, for VMs and the OS to be able to leverage features such as turbo mode and C-states, the BIOS has to be set to enable OS control. Without giving such control to the Linux OS, there are indeed various limitations. The uncore parameter must also be set to "dynamic" (OS DBPM). If that cannot be done in the BIOS, the scaling governor can be set via the command:
xenpm set-scaling-governor performance
which takes effect immediately. For the setting to be preserved over reboots, the command:
/opt/xensource/libexec/xen-cmdline --set-xen cpufreq=xen:performance
has to be run as well. This all assumes there have not been significant changes since I last tried all this out, of course, which was over four years ago, but @abudef has older hardware, and I would think that would allow for this to be taken care of in the BIOS. To quote from my first article on this topic:
"Red Hat states specifically in the article https://access.redhat.com/articles/2207751 that a server should be set for OS (operating system) performance as otherwise, the operating system (in this case, XenServer) cannot gain access to control the CPU power management, which ties in with the ability to manage also the CPU frequency settings."BTW, the article you reference is now just about ten years old and references Xen kernel 3.4. The latest Xen release is 4.18.
-
RE: XCP-ng host - Power management
@abudef Thanks for your feedback. Those should help, since those servers appear to be of a similar vintage. Looks like you have 2 CPUs per server, so the memory will be physically split between the interconnects and hence NUMA will play a role. Let me know if you have specific questions after you go through all that information.
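To see how the cores and memory are laid out across the two sockets, the hypervisor can report its topology from dom0; a sketch:
xl info -n
The cpu_topology section maps each core to a socket and NUMA node, which is what the scheduler works against.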
As an aside, the activity of the VMs will have a big influence on power consumption, probably more than various BIOS settings. Note that if you want to make use of turbo mode and C-states, you'll need to set the BIOS to OS control. Here's a pretty good thread that discusses Dell BIOS power state settings that may be useful: https://serverfault.com/questions/74754/dell-poweredge-powersaving-bios-settings-differences
Power settings will have only a minimal effect on power consumption when the server is idle. I ran servers that had something like 80 virtual desktop VMs on them set to high performance, because during the day they needed all the power they could get. When the labs closed at night, the power consumption went way down. But it's always best to verify what works or not in your very own environment, as I state many times in my articles!
Best regards,
Tobias -
RE: XCP-ng host - Power management
@abudef Hello! It depends on many factors. Do you have any GPUs? What are the server model and its CPU configuration and specifics? What kind of VMs are you running?
Note that many CPU power settings can and should be performed in the BIOS. Let us know what your server configuration looks like; in the meantime, this article may be of some use to you:
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-how-bios-settings-can-affect-your-apps-and-gpu-performance/ -
RE: Some questions about vCPUs and Topology
@jasonnix Yes, a vCPU means a virtual CPU, which is what a VM's processor is called when it is assigned to run on a physical CPU core.
Servers have sockets that contain physical CPUs, so it sounds like your system has four sockets holding four physical CPUs.
Each physical CPU can have multiple cores and, in some cases, one thread per core or, in others, two threads per core, but let's stick to the simpler case here.
A configuration of 4 cores with 1 core per socket means each of the 4 vCPUs will reside on a core of a separate physical CPU socket, so all four physical CPUs are accessed. This is in most cases not ideal: in many servers with 4 physical CPUs, the memory banks are split between pairs of CPUs, two on one bank and two on the other. Having VMs cross physical CPU memory bank boundaries is generally inefficient and should be avoided if possible. This is why NUMA (Non-Uniform Memory Access) and vNUMA become important in the configuration.
And @gskger is correct that licensing can sometimes depend on the configuration.
I should add that under some circumstances, configuring the CPUs for turbo mode can be an advantage.
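If you want to experiment, the topology can also be set per VM from the CLI; a minimal sketch, assuming a halted VM that should present its 4 vCPUs as one socket with 4 cores (substitute your VM's UUID):
xe vm-param-set uuid=<vm-uuid> VCPUs-max=4
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=4
xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=4
The cores-per-socket value must divide evenly into the vCPU count, and the change takes effect on the next boot of the VM.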
Suggested reading: my three-part set of articles on the effects of vCPU and GPU allocations. In particular, Part 3 addresses NUMA and configuration issues, and Part 2 discusses turbo mode.
I hope this helps, as this is all initially quite confusing.
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-how-bios-settings-can-affect-your-apps-and-gpu-performance/
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-part-2-how-not-only-bios-settings-but-also-gpu-settings-can-affect-your-apps-and-gpu-performance/
https://community.citrix.com/citrix-community-articles/a-tale-of-two-servers-part-3-the-influence-of-numa-cpus-and-sockets-cores-persocket-plus-other-vm-settings-on-apps-and-gpu-performance/ -
RE: VDIs attached to Control Domain
@bpsingh76 You should be able to run "xe vdi-list" on these and see if they are attached to anything. If not, it would appear they are just leftover templates.
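For example, to see whether a given VDI is attached anywhere, list the VBDs that reference it (a sketch; substitute the VDI's UUID):
xe vbd-list vdi-uuid=<vdi-uuid>
An empty result means no VM or control domain has a block device pointing at that VDI.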
-
RE: Rolling Pool Update Failed
@stevewest15 Looks to me like your hosts might not all have the same updates/patches applied. I'd check to make sure all of them are up-to-date and identical.
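One quick way to compare them, as a sketch to run on each host:
rpm -qa | sort > /tmp/$(hostname)-packages.txt
yum check-update
Diffing the package lists between hosts will show any drift, and check-update lists anything pending without applying it.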
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@Forza That's all good advice. Again, the showmount command is a good utility that can show you right away whether you can see/mount the storage device from your host.
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary First off, sorry to hear about your personal issues. They need to take precedence.
If there is network contention, it's important that other workloads not be impacted during backups, because backups utilize a lot of resources: network, memory, and CPU.
That's why we always ran them over isolated network connections and at times of the day when VM activity was generally at a minimum. Make sure you have adequate CPU and memory on your hosts (run top or xentop); also, iostat (I suggest adding the -x flag) can be very helpful in seeing whether other resources are getting maxed out.
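As a sketch, extended device statistics can be sampled every five seconds in dom0:
iostat -x 5
Watch the %util and await (or r_await/w_await) columns for devices that are saturated.
-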
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary This is unfortunate news. We ran backups over NFS successfully for years, but with Dell PowerVault units NFS-mounted on Dell Linux boxes and with XenServer hosted on Dell PowerEdge servers, so everything was Dell, which probably made things more compatible.
You don't have some weird firewall or other authentication issue? And is your backup network on a separate physical network or at least a private VLAN? I will also note that some problems we had now and then were due to bad Ethernet cables! Are your network connections to the storage devices bonded or using multipath?
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary Ouch. Make sure all your servers are properly time synchronized. Can you do a showmount from your server to the NFS storage device to see if the host has access permissions?
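A minimal check from dom0, with <nas-ip> standing in for your storage device's address:
showmount -e <nas-ip>
This lists the exports the device is willing to serve to that host; if it hangs or errors out, the problem sits at the network or permissions layer.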
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary How odd, unless the default NFS version somehow changed since it last worked and you had to specify V3 to get it to work again.
I'd contact Synology to find out if perhaps the storage unit needs a firmware upgrade to support V4? Perhaps they've had similar feedback from
other customers.
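One way to test the version negotiation by hand, as a sketch with placeholder values (from a spare Linux box):
mount -t nfs -o vers=3 <nas-ip>:/<export> /mnt/test
mount -t nfs -o vers=4 <nas-ip>:/<export> /mnt/test
If the vers=4 mount fails while vers=3 works, that would confirm the unit's NFSv4 support is the issue.
-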
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary Just a guess, but is there perhaps an NFS lock involved, because the CPU is getting pegged and a timeout of sorts occurs? Check the lockd daemon state, perhaps.
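A quick way to see whether the lock manager is even registered on the storage side, with <nas-ip> as a placeholder:
rpcinfo -p <nas-ip> | grep -i nlockmgr
No nlockmgr entries would point to the NFS lock manager not running or not reachable.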
-
RE: GPU support and Nvidia Grid vGPU
@msupport Many thanks for your write-up! Have you experienced any issues communicating with the NVIDIA license server?