RE: VDIs attached to Control Domain
@bpsingh76 You should be able to run "xe vdi-list" on these and see if they are attached to anything. If not, it would appear they are just leftover templates.
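For example, something along these lines should show it (the VDI UUID is a placeholder):
    # Show each VDI along with any VBDs; an empty vbd-uuids field means nothing is attached
    xe vdi-list params=uuid,name-label,vbd-uuids
    # Or check one specific VDI; no output means it is unattached
    xe vbd-list vdi-uuid=<VDI-UUID>
-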
RE: Rolling Pool Update Failed
@stevewest15 Looks to me like all your hosts might not have the same updates/patches applied. I'd check to make sure they are all up-to-date and the same on all your hosts.
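For instance, on XCP-ng 8.x (where yum is available in dom0), a quick sketch to compare hosts:
    # On each host, check whether any updates are still pending
    yum check-update
    # Compare the installed package set between hosts
    rpm -qa | sort > /tmp/$(hostname)-packages.txt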
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@Forza That is all good advice. Again, the showmount command is a good utility that can show you right away if you can see/mount the storage device from your host.
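For example (server address and export path are placeholders):
    # List the exports the NFS server offers and which clients are allowed
    showmount -e <NFS-server-IP>
    # Optionally try a manual test mount from the host
    mount -t nfs <NFS-server-IP>:/<export-path> /mnt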
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary Sorry first off to hear about your personal issues. They need to take precedence.
If there is network contention, backups are among the first things to be impacted, because they utilize a lot of resources: network, memory, and CPU.
That's why we always ran them over isolated network connections and at times of the day when VM activity was generally at a minimum. Make sure you have adequate CPU and memory on your hosts (run top or xentop) and also, iostat (I suggest adding the -x flag) can be very helpful in seeing if other resources are getting maxed out.
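For example:
    # Sample extended disk/IO statistics every 5 seconds during the backup window
    iostat -x 5
    # Watch dom0 and per-VM CPU/memory load
    xentop
-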
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary This is unfortunate news. We ran backups over NFS successfully for years, but with Dell PowerVault units NFS-mounted on Dell Linux boxes and with XenServer hosted on Dell PowerEdge servers, so everything was Dell, which probably made things more compatible.
You don't have some weird firewall or other authentication issue? And is your backup network on a separate physical network or at least a private VLAN? I will also note that some problems we had now and then were due to bad Ethernet cables! Are your network connections to the storage devices bonded or using multipath?
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary Ouch. Make sure all your servers are properly time synchronized. Can you do a showmount from your server to the NFS storage device to see if the host has access permissions?
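For example, from dom0 on each host:
    # Check time sync (chrony on XCP-ng 8.x, ntpd on older releases)
    chronyc tracking || ntpstat
    # Quick sanity check that the clocks agree across hosts
    date -u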
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary How odd, unless the default NFS version somehow changed since it last worked and you had to specify V3 to get it to work again.
I'd contact Synology to find out if perhaps the storage unit needs a firmware upgrade to support V4? Perhaps they've had similar feedback from other customers.
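To see which NFS version is actually being negotiated, something like this from dom0 should tell you:
    # Show the NFS version and options in use for each mounted remote
    nfsstat -m
    # Or look for the vers= option on the active NFS mounts
    mount -t nfs,nfs4
-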
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary Just a guess, but is there any NFS lock perhaps involved because of the CPU getting pegged and there being a timeout of sorts? Check the lockd daemon state, perhaps.
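A couple of quick checks (the server address is a placeholder):
    # See whether the NFS lock manager is registered on the storage side
    rpcinfo -p <NFS-server-IP> | grep -i nlockmgr
    # And look for lock- or NFS-related messages on the host
    dmesg | grep -iE "lockd|nfs"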
-
RE: GPU support and Nvidia Grid vGPU
@msupport Many thanks for your write-up! Have you experienced any issues communicating with the NVIDIA license server?
-
RE: GPU support and Nvidia Grid vGPU
@msupport Please write up all the steps involved, as this would be very useful documentation for anyone else wanting to accomplish this. Many have delayed switching to XCP-ng because of not being able to make use of NVIDIA GPUs.
-
RE: CPU Provisioning
@Kajetan321 You should consider the effects of NUMA/vNUMA in terms of performance. Crossing over to other physical CPUs or memory banks will create slowdowns.
If possible, unless you absolutely need all those VCPUs on either or both VMs, you may be best off splitting the 36 VCPUs and assigning 18 to each VM, checking to see if all the memory stays with the bank(s) of those specific VCPUs. See my articles on the CUGC site about NUMA and in particular, graphics performance (and of course, calculations will also be affected). Best would be to run benchmarks with both configurations. Shrinking the number of VCPUs on a running VM can lead to issues (as indicated in the video posted just before my response).
See the Tale of Two Servers articles here for details: https://community.citrix.com/profile/46909-tobias-kreidl/#=
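A quick sketch of how you could check the topology and split the VCPUs (VM UUIDs are placeholders, and the VMs need to be halted to change VCPUs-max):
    # Inspect the host's NUMA topology from dom0
    xl info -n
    # Give each VM 18 VCPUs
    xe vm-param-set uuid=<VM1-UUID> VCPUs-max=18 VCPUs-at-startup=18
    xe vm-param-set uuid=<VM2-UUID> VCPUs-max=18 VCPUs-at-startup=18
-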
RE: XOSAN, what a time to be alive!
@mauzilla According to the documentation, I do not see any way to exclude a host. XOSAN is expected to be installed on every host in the pool.
However, it's not clear what would happen if a host did not have any local storage or, alternatively, if the size of the XOSAN storage exceeds the size of the local storage on any given host.
My guess is that the assumption is that all nodes in the pool will be very similar and hence, the process is simplified because of the uniformity of storage on all the hosts.
For more on the storage creation process, see: https://xen-orchestra.com/docs/xosan.html#creation
-
RE: Is backing up a VM by SR possible?
@mjr99 AFAIK, you'd have to run a script to determine what VMs are resident on which SRs and then handle that output accordingly to create a backup procedure.
You can check what SRs are in use for a VM with CLI commands like the ones discussed here:
https://support.citrix.com/article/CTX217612/how-to-find-the-disk-associated-to-a-vm-from-xenserver-cli
What happens if you have a VM with its storage on more than one SR? I guess you could prioritize which backup it belongs to, but you would not want that VM in more than one list. Again, something that could be handled via scripting.
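A rough sketch of such a script, run from dom0 (untested, just to illustrate the idea):
    # List each VM together with the SR(s) its disks live on
    for vm in $(xe vm-list is-control-domain=false params=uuid --minimal | tr ',' ' '); do
        echo "VM: $(xe vm-param-get uuid=$vm param-name=name-label)"
        for vdi in $(xe vbd-list vm-uuid=$vm type=Disk params=vdi-uuid --minimal | tr ',' ' '); do
            xe vdi-param-get uuid=$vdi param-name=sr-name-label
        done
    done
-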
RE: Problems with existing pool, problems migrating to new pool
@simpleisbest Yeah, good that you got that host that was causing the issues ejected. Note that any data on a local SR will have been lost when you eject a host from a pool. Glad to see the coalescing kicking in at last! A coalesce can take a very long time.
-
RE: Problems with existing pool, problems migrating to new pool
@simpleisbest That should work. It appears it's a host that is offline, not a VM, though, correct? I cannot imagine a VM being offline would affect the coalesce.
What is associated with UUID=a60d10b4-b4d0-4bae-a5ab-f8f6b9c03ca8 ?
I gather that "xe task-list" no longer showed the coalesce process as being active?
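If you're not sure what that UUID belongs to, a quick loop like this (UUID is a placeholder) will identify the object type:
    for obj in vm vdi vbd sr pbd host task; do
        out=$(xe ${obj}-list uuid=<UUID> --minimal 2>/dev/null)
        [ -n "$out" ] && echo "That UUID is a $obj"
    done
-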
RE: Problems with existing pool, problems migrating to new pool
@simpleisbest Seems like you just need to go in and delete some of the snapshot chain instances. Of course, make sure to keep the base and the latest one.
Coalescing has nothing to do with bringing up or down VMs; it's often just a matter of if the SR is too full (over about 90% capacity). Doing an "xe sr-scan uuid=(UUID-of-SR)" should trigger a coalesce which you can verify with "xe task-list."
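For example (the SR UUID is a placeholder):
    # Kick off an SR scan, which should trigger the coalesce
    xe sr-scan uuid=<SR-UUID>
    # Confirm the coalesce/GC task is running
    xe task-list
    # The garbage-collection progress is also logged in dom0
    tail -f /var/log/SMlog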
-
RE: Some HA Questions Memory Error, Parallel Migrate, HA for all VMs,
@nikade No, but I've done a lot -- probably one or two dozen -- when doing updates to help speed up the evacuation of hosts. You can check the queue with "xe task-list" to see what's being processed or queued.
RE: Some HA Questions Memory Error, Parallel Migrate, HA for all VMs,
@nikade And if you queue up more than three migration instances, my experience has been that they are processed such that no more than three run concurrently.
-
RE: Some HA Questions Memory Error, Parallel Migrate, HA for all VMs,
@nikade In XenServer at least, I thought the limit was three VMs being able to be migrated in parallel, according to this:
https://docs.xenserver.com/en-us/xencenter/current-release/vms-relocate.html