I’m encountering major issues getting PCIe GPU passthrough working on my homelab setup:
Host hardware & firmware
•	Dell PowerEdge R740 (dual Xeon Gold 6230R)
•	Single NVIDIA RTX A4000 GPU
•	BIOS settings confirmed: Virtualization enabled, IOMMU enabled, “Above 4G Decode” enabled
•	XCP-ng host (dom0) is not using the GPU; both functions (graphics + audio) are assigned to the VM
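For context, hiding the GPU from dom0 was done with XCP-ng’s xen-cmdline helper, roughly as below (the PCI addresses are placeholders for my actual slot; substitute your own from `lspci`):

```shell
# Hide both GPU functions (graphics + audio) from dom0 so they can be
# passed through. Addresses 0000:3b:00.0/0.1 are placeholders.
/opt/xensource/libexec/xen-cmdline --set-dom0 \
  "xen-pciback.hide=(0000:3b:00.0)(0000:3b:00.1)"
# Reboot the host afterwards for the change to take effect.
```
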
VM Guest details
•	Guest OS: Arch Linux (also tested Windows)
•	VM config: Both PCI devices (GPU + HDMI/Audio) attached via passthrough
•	On Arch: kernel cmdline includes pcie_aspm=off nvidia.NVreg_EnableGpuFirmware=0
•	On the XCP-ng host, VM platform flags set: pci-msitranslate=true, pci-power_mgmt=false, device-model=qemu-upstream-compat; UEFI/OVMF enabled
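For completeness, the platform flags above were applied with `xe vm-param-set`, approximately as follows (`<vm-uuid>` is a placeholder for the VM’s UUID from `xe vm-list`):

```shell
# Platform flags as set on my VM; <vm-uuid> is a placeholder.
xe vm-param-set uuid=<vm-uuid> platform:pci-msitranslate=true
xe vm-param-set uuid=<vm-uuid> platform:pci-power_mgmt=false
xe vm-param-set uuid=<vm-uuid> platform:device-model=qemu-upstream-compat
```
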
Symptoms
•	On Arch: repeated log entries such as:
NVRM: gpuHandleSanityCheckRegReadError_GM107: Possible bad register read …
NVRM: GSP failed to halt with GFW_BOOT …
RmInitAdapter: Cannot initialize GSP firmware RM
•	On Windows: Device Manager shows “Error 43” for the GPU.
What I’ve done so far
•	Verified IOMMU groups; both functions isolated and passed through correctly
•	Checked FLR support for the GPU: GPU core supports FLR, audio function does not
•	Tried disabling ASPM/power management in host & guest
•	Tried older NVIDIA driver versions (including 510.xx branch)
•	Verified large BARs are present in guest lspci -vvv output
•	Uploaded full dmesg logs + BIOS dump from Redfish for review
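In case it helps anyone reproduce the checks above, this is roughly how I verified IOMMU grouping and FLR support (the PCI address in the comment is a placeholder for my GPU’s slot):

```shell
#!/usr/bin/env bash
# Print every PCI device grouped by IOMMU group (run on the host).
# On a machine without an active IOMMU the loop simply prints nothing.
list_iommu_groups() {
  shopt -s nullglob
  local dev group
  for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    echo "IOMMU group ${group}: ${dev##*/}"
  done
}
list_iommu_groups

# FLR support shows up as "FLReset+" in the DevCap line of lspci output;
# on my card the GPU function reports FLReset+ and the audio function FLReset-.
# (0000:3b:00.0 is a placeholder address.)
#   lspci -vv -s 0000:3b:00.0 | grep -o 'FLReset[+-]'
```
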
Attachments
•	dmesg_errors.log
•	BIOS dump from Redfish
Request for help
If you have successfully passed through an RTX A4000 on XCP-ng (to a Linux or Windows guest), could you share:
•	XCP-ng version, guest OS, driver version
•	VM platform flags (especially any non-default settings)
•	Any custom vBIOS or device reboot/reset tweaks you used
•	Any additional steps you found necessary for stability
I’m pretty much out of ideas at this point and would appreciate any working configurations or suggestions.