@Andrew I thought dom0 had to have some sort of graphics card; is this a recent development? My understanding is that dom0 needed one, albeit not necessarily any with GPU capabilities. Thanks in advance for any clarifications.
Posts
-
RE: Does dom0 require a GPU?
-
RE: Does dom0 require a GPU?
@Johny It would help to know how the two GPUs are configured on the host. Is this host part of a pool or standalone?
-
RE: Can't designate new master on XO source pool
@vaewyn There is an xe command for an emergency transition to a new master. Also, make sure all your hosts are properly time-synchronized with each other, or there can be pool issues.
Try first:
xe pool-designate-new-master host-uuid=<new-master-uuid>
If that fails, run this on the slave you want to promote:
xe pool-emergency-transition-to-master
Then run xe pool-recover-slaves on the new master so the remaining hosts point to it. -
RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
@olivierlambert Good idea. They should also make sure all hosts are at the same update/patch level, that the network is set up properly among the three or more hosts, that compatible shared storage for HA is properly set up, etc.
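Once those prerequisites are in place, enabling HA comes down to one xe command. A sketch from memory of the standard CLI (the UUIDs below are placeholders, so double-check against the docs):

```shell
# Enable HA on the pool, using a shared SR as the heartbeat SR.
# <sr-uuid> is a placeholder -- find the real UUID with: xe sr-list
xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>

# Verify afterwards (<pool-uuid> from: xe pool-list)
xe pool-param-get uuid=<pool-uuid> param-name=ha-enabled
```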
You folks have a good guide at: https://docs.xcp-ng.org/management/ha/ -
RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
@nikade Interesting, as that at some point used to be the case, at least with XenServer!
I stand corrected and learned something new. -
RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
Note also that if HA is turned on or off, the host must be restarted for that change to take effect, if I recall correctly.
-
RE: XCP-ng 8.3 and Dell R660 - crash during boot, halts remainder of installer process (bnxt_en?)
@olivierlambert I've always preferred Intel NICs.

-
RE: Force Remove a NFS Storage Repository
@kagbasi-ngc See if this thread can help you out:
https://xcp-ng.org/forum/topic/6618/how-to-remove-this-sr-nfs-storage/ -
RE: XCP-ng 8.3 and Dell R660 - crash during boot, halts remainder of installer process (bnxt_en?)
@umbradark Maybe too obvious, but is your boot configuration set to BIOS or UEFI mode?
-
RE: 10gb backup only managing about 80Mb
@nikade Did the same. VLANs are great! We did use separate NICs for iSCSI storage, but the management interface and VM traffic were handled easily by the dual 10 Gb NICs, even with several hundred XenDesktop VMs hosted among three servers (typically around 8- VMs per server).
-
RE: 10gb backup only managing about 80Mb
@utopianfish Or look for deals in places like amazon.com or bestbuy.com or even Ebay.com.
-
RE: 10gb backup only managing about 80Mb
@nikade Yeah, that is a far from optimal setup. It will force the data to flow through the management interface before being routed to the storage NICs.
Running iostat and xentop should show the load. A better configuration, IMO, would be putting the storage NICs on the switch and using a separate network or VLAN for the storage I/O traffic.
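If it helps, here's a quick way to eyeball the load from dom0 (a sketch; iostat comes from the sysstat package and xentop ships with the Xen tools, so the guard below skips whatever isn't installed):

```shell
# run_if_present: run a monitoring tool only if it is actually installed.
run_if_present() {
    if command -v "$1" >/dev/null 2>&1; then
        "$@"
    else
        echo "$1 not installed" >&2
    fi
}

# Extended per-device stats: 3 samples, 5 seconds apart.
run_if_present iostat -x 5 3

# Batch mode: two iterations of per-VM CPU, memory, network, and disk counters.
run_if_present xentop -b -i 2 -d 5
```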
Storage I/O optimization takes some time and effort. The type, number, and RAID configuration of your storage devices, as well as the speed of your host CPUs, the size and type of memory, and the configuration of your VMs (whether NUMA-aware, for example), all play a role. -
RE: GPU Passthrough
@gb.123 Interesting -- alert the XCP-ng team to take a closer look, if they haven't seen this already.
-
RE: GPU Passthrough
@gb.123 I'm sure you can also find some NVIDIA "how to" guides that might be helpful. As mentioned before, I've only done server passthrough so that all VMs would get access,
so sorry I can't provide more specifics. You can always try one option first and add the other if it still doesn't work. I'm pretty sure, though, that you need both enabled.
Keep us posted! -
Keep us posted! -
RE: GPU Passthrough
@gb.123 Are you sure you have the correct drivers installed? Also, check if the GPU is compatible with AMD CPUs -- I only had Intel CPUs, so am not sure if that's an issue or not.
-
RE: GPU Passthrough
@gb.123 You need to do both. After adding the PCI device, you might also need to specifically enable "passthrough" for that device within the VM's settings. It may do it automatically when you add it.
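For what it's worth, the dom0 CLI side of this can be sketched roughly as follows (the PCI address 0000:01:00.0 and <vm-uuid> are placeholders; verify the exact steps against the XCP-ng passthrough docs for your version):

```shell
# 1. Locate the GPU's PCI address.
lspci | grep -i nvidia

# 2. Hide the device from dom0 so a guest can claim it (reboot the host after).
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"

# 3. With the VM halted, attach the hidden device to it.
xe vm-param-set other-config:pci=0/0000:01:00.0 uuid=<vm-uuid>
```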
Make sure the appropriate NVIDIA driver is also installed in the VM. -
RE: GPU Passthrough
@gb.123 You are trying to do passthrough to a specific VM? I don't think that used to be supported, but maybe it is now.
Are NVIDIA drivers installed on the VM, as needed?
Sorry, it's been a while since doing this so I'm digging back into my memory.
Also, is IOMMU supported and enabled in the BIOS?
Also, check this out and see if it may be of some help:
https://www.youtube.com/watch?v=_JPmxmxqhds -
RE: GPU Passthrough
@gb.123 Ah, OK. Then the more powerful GPU is the RTX 4060, right? If so, use it for the passthrough. Also, on some systems you have to change a BIOS setting to allow this to work because of memory limitations, but probably only on much older systems, if I recall correctly.
-
RE: GPU Passthrough
@gb.123 You need one video card for your administrative console, and another can be used for GPU passthrough. There must be two separate physical devices.
So make sure you have two video boards, one of which has the GPU capabilities you want to use in your passthrough configuration. -
RE: Possible for a script on one host to test fr VM runnig on another host?
@archw Just write a shell script and use ssh to securely query that host for the status of the VM. You may need to add the accessing hosts to /etc/hosts.allow.
See for example: https://linuxconfig.org/hosts-allow-format-and-example-on-linux
That said, HA is clearly a better option, provided you have a compatible SR available.
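A minimal sketch of such a check (the host name and VM name are placeholders, and it assumes key-based ssh access to dom0 on the other host):

```shell
# check_vm_state: report whether a VM is running, given its power-state string.
check_vm_state() {
    vm_name="$1"
    state="$2"
    if [ "$state" = "running" ]; then
        echo "$vm_name is running"
    else
        echo "$vm_name is NOT running (state: ${state:-unknown})"
    fi
}

# On the monitoring host, fetch the power state over ssh and report it.
# "other-host" and "my-vm" are placeholders for your host and VM names.
# state=$(ssh root@other-host xe vm-list name-label=my-vm params=power-state --minimal)
# check_vm_state my-vm "$state"
```

Uncomment the last two lines and substitute your own host and VM names to use it.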