Posts
-
RE: GPU Passthrough
@gb.123 You need to do both. After adding the PCI device, you might also need to explicitly enable passthrough for that device within the VM's settings, although it may be done automatically when you add the device.
Make sure the appropriate NVIDIA driver is also installed in the VM.
-
RE: GPU Passthrough
@gb.123 You are trying to do passthrough to a specific VM? I don't think that used to be supported, but maybe it is now.
Are NVIDIA drivers installed on the VM, as needed?
Sorry, it's been a while since doing this so I'm digging back into my memory.
Also, is IOMMU supported and enabled in the BIOS?
Also, check this out and see if it may be of some help:
https://www.youtube.com/watch?v=_JPmxmxqhds
-
RE: GPU Passthrough
@gb.123 Ah, OK. Then the more powerful GPU is the RTX 4060, right? If so, use it for the passthrough. Also, on some CPUs you have to change a BIOS setting to allow this to work because of memory limitations, but probably only on much older systems, if I recall correctly.
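If it helps, here's a rough sketch of the usual CLI sequence from dom0; the PCI address 0000:01:00.0 and the VM UUID are placeholders, so check yours with lspci and xe vm-list first:
lspci | grep -i nvidia
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"
xe vm-param-set uuid=<vm uuid> other-config:pci=0/0000:01:00.0
The xen-cmdline step hides the card from dom0 (a host reboot is needed afterwards), and the vm-param-set step assigns it to the VM.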
-
RE: GPU Passthrough
@gb.123 You need one video card for your administrative console and another can be used for GPU passthrough. There must be two separate physical devices.
So make sure you have two video boards, one of which has the GPU capabilities you want to use in your passthrough configuration.
-
RE: Possible for a script on one host to test for VM running on another host?
@archw Just write a shell script and use ssh to securely run the script to query that host for the status of that VM. You may need to add the accessing hosts to /etc/hosts.allow (might be hosts_allow, I can't recall offhand).
See for example: https://linuxconfig.org/hosts-allow-format-and-example-on-linux
That said, HA is clearly a better option, provided you have a compatible SR available.
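Something like this minimal sketch, assuming key-based ssh access to the other host's dom0 (the host and VM names are just placeholders):
#!/bin/bash
REMOTE_HOST="xcp-host2"
VM_NAME="my-vm"
STATE=$(ssh root@"$REMOTE_HOST" xe vm-list name-label="$VM_NAME" params=power-state --minimal)
echo "$VM_NAME on $REMOTE_HOST is: ${STATE:-not found}"
It just asks the remote xe CLI for the VM's power-state and prints it.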
-
RE: Intel CPUs with P and E cores or other server CPU suggestions
@MasterOSkillio Typically, C-states need to be changed in the BIOS. In some cases, it can be very helpful. I wrote a few blog posts covering this topic, including one entitled "A Tale of Two Servers", but cannot readily find them on-line at the moment. Alas, Citrix has purged a lot of still relevant older content over the years.
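If you want to see what dom0 currently reports before going into the BIOS, a quick check (hedged, from memory) is:
xenpm get-cpuidle-states
xenpm get-cpufreq-para
The first shows per-CPU C-state usage as seen by the hypervisor, the second the current frequency/governor settings.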
-
RE: XO one time job scheduler
@RS One option would be this, assuming in this case you want to run the job at midnight on Dec. 25:
/bin/echo "/path/to/your/script.sh" | at midnight Dec 25
While cron doesn't offer a specific one-time execution, you could also do this in cron but would have to remove the entry afterwards:
0 0 25 12 * /path/to/your/script.sh
Also, take a look at this option: https://www.fastcron.com/guides/one-time-cronjobs/
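If you go the at route, it's also worth knowing how to verify and cancel the queued job:
atq
atrm <job number>
atq lists pending jobs with their job numbers, and atrm removes one if you change your mind.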
-
RE: CPU Stats bottoming out to Zero every five minutes
@DKirk That all makes sense, thanks for clarifying. Looks like there are further comments below that seem to pinpoint where the issue may lie. The key point you make is that this only started happening "after the last updates"!
-
RE: CPU Stats bottoming out to Zero every five minutes
@DKirk Very odd. Maybe an electrical power issue? Do you see this if you run xentop on each host, and, really important, do the dips happen at the same time on all your servers?
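One rough way to compare them is to capture xentop in batch mode on each host at about the same time and then line up the timestamps of the dips (the file name is just an example):
xentop -b -d 5 -i 120 > /tmp/xentop-$(hostname).log
That's batch mode with a 5-second interval for roughly 10 minutes.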
Any chance they are overheating and pausing briefly?
-
RE: How to Re-attach an SR
@olivierlambert Agreed. The Citrix forum used to be very active, but especially since Citrix was taken over, https://community.citrix.com has had way less activity, sadly.
It's still gratifying that a lot of the functionality is common to both platforms, although as XCP-ng evolves, there will be progressively less commonality.
-
RE: How to Re-attach an SR
@Chrome Cheers -- always glad to help out. I put in many thousands of posts on the old Citrix XenServer site, and am happy to share whatever knowledge I still have, as long as it's still relevant! In a few years, it probably won't be, so carpe diem!
-
RE: How to Re-attach an SR
@Chrome Fantastic! Please mark my post as helpful if you found it as such. Was traveling much of today, hence the late response.
BTW, it's always good to make a backup and/or archive of your LVM configuration anytime you change it, as the restore option is the cleanest way to deal with connectivity issues if there is some sort of corruption. It's saved my rear end before, I can assure you!
Yeah, if the SSD drive got wiped, there's no option to get those back unless you made a backup somewhere of all that before you installed XCP-ng onto it.
BTW, another very useful command for LVM is "vgchange -ay", which will attempt to activate volume groups when a VG appears to be missing or inactive.
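To make the backup part concrete, a hedged example with <vg_name> as a placeholder:
vgcfgbackup -f /root/<vg_name>-metadata.backup <vg_name>
vgcfgrestore -f /root/<vg_name>-metadata.backup <vg_name>
vgchange -ay <vg_name>
The first saves the VG metadata to a file you keep somewhere safe, the second writes it back if the on-disk metadata ever gets corrupted, and vgchange -ay then re-activates the VG.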
-
RE: How to Re-attach an SR
@Chrome As M. Lambert says, you may be able to use pbd-plug to re-attach the SR if you can sr-introduce the old SR back into the system.
If not, and if your LVM configuration has not been wiped out, here are some steps to try to recover it (it's an ugly process! See the command sketch after this list.):
- Identify the LVM configuration:
Check for Backups: Look for LVM metadata backups in /etc/lvm/archive/ or /etc/lvm/backup/.
Use vgscan: This command will search for volume groups and their metadata.
Use pvscan: This command will scan for physical volumes.
Use lvs: This command will list logical volumes and their status.
Use vgs: This command will list volume groups.
- Restore from Backup (if available):
Find the Backup: Locate the LVM metadata backup file (e.g., /etc/lvm/backup/<vg_name>).
Boot into Rescue Mode: If you're unable to access the system, boot into a rescue environment.
Restore Metadata: Use vgcfgrestore to restore the LVM configuration.
- Recreate LVM Configuration (if no backup):
Identify PVs: Use pvscan to list available physical volumes.
Identify VGs: Use vgscan to identify volume groups if they are present.
Recreate PVs: If necessary, use pvcreate to create physical volumes.
Create VGs: If necessary, use vgcreate to create a new volume group.
Create LVs: If necessary, use lvcreate to create logical volumes.
- Mount and Verify:
Mount the Logical Volumes: Mount the restored LVM volumes to their respective mount points.
Verify Data: Check the integrity of the data on the restored LVM volumes.
- Extend LVM (if adding capacity):
Add a new disk: Ensure the new disk is recognized by the system.
Create PV: Use pvcreate on the new disk.
Add PV to VG: Use vgextend to add the PV to the volume group.
Extend LV: Use lvextend to extend the size of an existing logical volume.
Extend Filesystem: Use resize2fs to extend the filesystem (ext3/ext4) on the LV.
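To make the backup-restore path above concrete, here is a rough command sketch; <vg_name> is a placeholder, and be cautious running any create or restore command against disks that still hold data:
vgscan
pvscan
ls /etc/lvm/backup/ /etc/lvm/archive/
vgcfgrestore -f /etc/lvm/backup/<vg_name> <vg_name>
vgchange -ay <vg_name>
lvs
The first three just look around; vgcfgrestore writes the saved metadata back, vgchange -ay activates the VG, and lvs should then show your logical volumes again.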
-
RE: How to Re-attach an SR
@Chrome Then just do an "xe vm-list" and see if you recognize any VMs other than the dom0 instance of XCP-ng.
If there is nothing else showing up, you will need to try to find your other LVM storage.
-
RE: How to Re-attach an SR
@Chrome Try "xe vm-list params=all"
Do you only have local storage, or did you have any attached storage that's not showing up?
-
RE: How to Re-attach an SR
@Chrome OK, now try "xe sr-introduce" (check the documentation for the full command syntax you need), depending on what your connection type is:
xe sr-introduce uuid=<device uuid> shared=true type=lvmohba name-label=<name>
xe sr-introduce uuid=<device uuid> shared=true type=lvmoiscsi name-label=<name>
xe sr-introduce uuid=<device uuid> shared=true type=nfs name-label=<name>
If you are lucky and the interface still exists by which the SR was attached, that might do the trick.
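If sr-introduce succeeds, the SR also needs a PBD on each host before it can be plugged. A hedged sketch for the lvmoiscsi case (the device-config keys depend on your SR type, and all the UUIDs and target details are placeholders):
xe pbd-create sr-uuid=<sr uuid> host-uuid=<host uuid> device-config:target=<target ip> device-config:targetIQN=<iqn> device-config:SCSIid=<scsi id>
xe pbd-plug uuid=<pbd uuid>
The pbd-create call returns the PBD UUID you then pass to pbd-plug.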
-
RE: How to Re-attach an SR
No metadata backup? All that info should be contained within it. Otherwise, it could be a long, painful process. Are they LVM drives, or how were they initially created?
I assume "xe sr-list" shows nothing, or does it?
-
RE: Rolling Pool Update - not possible to resume a failed RPU
@Andrew Right, backups should be shut off during the RPU process.
-
RE: Rolling Pool Update - not possible to resume a failed RPU
@manilx If the VMs can be shut down, yes, otherwise migrate the VMs. Luckily, you can migrate from a host with a lower hotfix level to one that has a higher level, but I do not believe the reverse is possible.
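For the migration itself, a hedged CLI example (names/UUIDs are placeholders; XO or XenCenter can of course do the same thing):
xe vm-migrate vm=<vm uuid or name> host=<already-updated host> live=true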
-
RE: Rolling Pool Update - not possible to resume a failed RPU
@ecoutinho Performing manual updates is an option. The master probably checked the hosts for hotfix uniformity before the rolling pool upgrade started, but since it failed to complete, you now have a discrepancy in the patch level for those two hosts. I had that happen once because of a root space error, which was a pain to deal with. I cannot recall the specific fix, but I think I had to migrate the VMs to the updated hosts and do a whole new install after redoing the partition table (it was that dreaded extra "Dell" partition at the time that caused the issue).