@JSylvia007 Sorry, I'm really late to this thread, but note that backups can become problematic if the SR is around 90% full or more. The process needs some storage headroom to work with. The fact that you could copy/clone VMs means your SR is working OK, but backups are a different situation. If need be, you can always migrate VMs to other storage, which is evidently what you ended up doing and which frees up extra disk space. Backups are also fairly intensive, so make sure you have enough CPU capacity and memory to handle the load. Finally, a defective SR will definitely cause issues if there are I/O errors, so watch /var/log/SMlog for any such entries.
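To put a number on the headroom point, here is a quick sketch of checking SR utilisation before kicking off a backup. The size values below are made-up placeholders; on a real host you would read them from `xe sr-list`.

```shell
# Hypothetical values; on a real host, read them from:
#   xe sr-list params=name-label,physical-size,physical-utilisation
PHYSICAL_SIZE=1099511627776          # 1 TiB SR (placeholder)
PHYSICAL_UTILISATION=1043677052928   # bytes in use (placeholder)

PCT=$(( PHYSICAL_UTILISATION * 100 / PHYSICAL_SIZE ))
echo "SR is ${PCT}% full"
if [ "$PCT" -ge 90 ]; then
    echo "WARNING: free up space or migrate VMs before running backups"
fi

# During the backup window, watch for I/O errors on the SR:
#   tail -f /var/log/SMlog | grep -iE 'error|failed'
```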
RE: Backup Suddenly Failing
-
RE: update: vGPU w NVIDIA Tesla P4
@Aleksander With a standard 8.2 install and the associated drivers, support is reported for the following:
Tesla M6/M10/M60, P4/P6/P40/P100, V100, T4, A2/A10/A16/A40, and RTX A5000/A6000/6000/8000 series.
The appropriate NVIDIA licensing must of course also be obtained and installed, including a license server. -
RE: Does dom0 require a GPU?
@Andrew I thought dom0 had to have some sort of graphics card; is this a recent development? My understanding is that dom0 needed one, albeit not necessarily one with GPU capabilities. Thanks in advance for any clarifications.
-
RE: Does dom0 require a GPU?
@Johny It would help knowing how the two GPUs were configured on the host. Is this host part of a pool or standalone?
-
RE: Can't designate new master on XO source pool
@vaewyn There is an emergency transition-to-master xe command. Also, make sure all your hosts are properly time-synchronized to each other, or there can be pool issues.
Try first:
xe pool-designate-new-master host-uuid=<new-master-uuid>
If that fails, you will need to run on the slave server:
xe pool-emergency-transition-to-master -
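Putting the two commands together with the follow-up step for the remaining hosts (the UUID is a placeholder, and the commands are only echoed here as a dry-run sketch):

```shell
# Dry run: the commands are echoed, not executed (UUID is a placeholder).
NEW_MASTER_UUID="<new-master-uuid>"   # find it with: xe host-list

# Preferred: works while the pool still has a reachable master.
STEP1="xe pool-designate-new-master host-uuid=${NEW_MASTER_UUID}"

# Fallback: run directly on the slave you want promoted.
STEP2="xe pool-emergency-transition-to-master"

# Afterwards, on the new master, re-point the remaining slaves.
STEP3="xe pool-recover-slaves"

printf '%s\n' "$STEP1" "$STEP2" "$STEP3"
```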
RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
@olivierlambert Good idea. Also, they should make sure all hosts are at the same update/patch level, the network is set up properly among the three or more hosts, compatible HA shared storage is properly set up, etc.
You folks have a good guide at: https://docs.xcp-ng.org/management/ha/ -
RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
@nikade Interesting, as that at some point used to be the case, at least with XenServer!
I stand corrected and learned something new. -
RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
Note also that if HA is turned on or off, the host must be restarted for that change to take effect, if I recall correctly.
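For reference, the toggle itself is a single xe command each way (the heartbeat SR UUID is a placeholder; commands are echoed here rather than run):

```shell
# Dry run: echoed rather than executed (SR UUID is a placeholder).
HEARTBEAT_SR_UUID="<shared-sr-uuid>"   # must be shared storage, e.g. NFS or iSCSI

ENABLE="xe pool-ha-enable heartbeat-sr-uuids=${HEARTBEAT_SR_UUID}"
DISABLE="xe pool-ha-disable"

printf '%s\n' "$ENABLE" "$DISABLE"
```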
-
RE: XCP-ng 8.3 and Dell R660 - crash during boot, halts remainder of installer process (bnxt_en?)
@olivierlambert I've always preferred Intel NICs.

-
RE: Force Remove a NFS Storage Repository
@kagbasi-ngc See if this thread can help you out:
https://xcp-ng.org/forum/topic/6618/how-to-remove-this-sr-nfs-storage/ -
RE: XCP-ng 8.3 and Dell R660 - crash during boot, halts remainder of installer process (bnxt_en?)
@umbradark Maybe too obvious, but is your boot configuration set to BIOS or UEFI mode?
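One quick way to tell which mode a running Linux system booted in, assuming sysfs is available:

```shell
# If the kernel booted via UEFI, the firmware exposes this directory:
if [ -d /sys/firmware/efi ]; then
    MODE="UEFI"
else
    MODE="BIOS"
fi
echo "booted in ${MODE} mode"
```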
-
RE: 10gb backup only managing about 80Mb
@nikade Did the same. VLANs are great! We did use separate NICs for iSCSI storage, but the PMI and VM traffic were handled easily by the dual 10 Gb NICs, even with several hundred XenDesktop VMs hosted among three servers (typically around 8- VMs per server).
-
RE: 10gb backup only managing about 80Mb
@utopianfish Or look for deals in places like amazon.com, bestbuy.com, or even ebay.com.
-
RE: 10gb backup only managing about 80Mb
@nikade Yeah, that is a far from optimal setup. It will force the data to flow through the management interface before being routed to the storage NICs.
Running iostat and xentop should show the load. A better configuration IMO would be putting the storage NICs on the switch and using a separate network or VLAN for the storage I/O traffic.
Storage I/O optimization takes some time and effort. The type, number, and RAID configuration of your storage devices, as well as the speed of your host CPUs, the size and type of memory, and the configuration of your VMs (whether NUMA-aware, for example) will all play a role. -
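As a sanity check on the thread's numbers, here is back-of-the-envelope math for what a 10 Gb link should deliver (the ~10% protocol-overhead figure is a rough assumption, not a measurement):

```shell
LINK_GBPS=10
# Assume ~10% lost to TCP/IP framing overhead (rule of thumb, not measured):
USABLE_MBS=$(( LINK_GBPS * 1000 * 90 / 100 / 8 ))
echo "theoretical usable throughput: ~${USABLE_MBS} MB/s"
# If you only see something on the order of 80 MB/s, that's roughly what a
# saturated 1 Gb/s hop delivers, e.g. traffic detouring through a 1 Gb
# management interface.
```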
RE: GPU Passthrough
@gb.123 Interesting -- alert the XCP-ng team to take a closer look, if they haven't seen this already.
-
RE: GPU Passthrough
@gb.123 I'm sure you can also find some NVIDIA "how to" guides that might be helpful. As mentioned before, I've only done server passthrough so that all VMs would get access, so sorry I can't provide more specifics. You can always try one option and add the other if it still doesn't work. I'm pretty sure, though, that you do need both enabled.
Keep us posted! -
RE: GPU Passthrough
@gb.123 Are you sure you have the correct drivers installed? Also, check if the GPU is compatible with AMD CPUs -- I only had Intel CPUs, so am not sure if that's an issue or not.
-
RE: GPU Passthrough
@gb.123 You need to do both. After adding the PCI device, you might also need to specifically enable "passthrough" for that device within the VM's settings. It may do it automatically when you add it.
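For the command-line route, this is the usual shape of the setup (the PCI address and VM UUID are placeholders; the commands are echoed as a dry-run sketch):

```shell
# Dry run: echoed rather than executed (PCI address and UUID are placeholders).
GPU_BDF="0000:01:00.0"        # find yours with: lspci | grep -i nvidia
VM_UUID="<vm-uuid>"           # from: xe vm-list

# 1. Hide the GPU from dom0 so a guest can claim it (reboot required after):
HIDE="/opt/xensource/libexec/xen-cmdline --set-dom0 \"xen-pciback.hide=(${GPU_BDF})\""

# 2. Attach the device to the VM:
ATTACH="xe vm-param-set other-config:pci=0/${GPU_BDF} uuid=${VM_UUID}"

printf '%s\n' "$HIDE" "$ATTACH"
```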
Make sure the appropriate NVIDIA driver is also installed on the VM. -
RE: GPU Passthrough
@gb.123 You are trying to do passthrough to a specific VM? I don't think that used to be supported, but maybe it is now.
Are NVIDIA drivers installed on the VM, as needed?
Sorry, it's been a while since doing this so I'm digging back into my memory.
Also, is IOMMU supported and enabled in the BIOS?
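On the host itself you can verify what Xen saw at boot; a couple of checks, assuming standard dom0 tooling:

```shell
# Xen's boot log reports IOMMU (Intel VT-d / AMD-Vi) status; in dom0 run:
#   xl dmesg | grep -iE 'iommu|vt-d|amd-vi'
# The CPU virtualization flag is visible from /proc on Linux hosts
# (note: under Xen, dom0 may mask these flags, so prefer xl dmesg there):
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    VT="present"
else
    VT="not visible (or not a Linux host)"
fi
echo "CPU virtualization extensions: ${VT}"
```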
Also, check this out and see if it may be of some help:
https://www.youtube.com/watch?v=_JPmxmxqhds -
RE: GPU Passthrough
@gb.123 Ah, OK. Then the more powerful GPU is the RTX 4060, right? If so, use it for the passthrough. Also, on some CPUs you have to change a BIOS setting to allow this to work because of memory limitations, but probably only on much older systems, if I recall correctly.