yeah, closed-source drivers on Linux have always felt so... dirty...
I'd use AMD if I could, but unfortunately I need NVENC and NVDEC.
@xcp-ng-justgreat said in Nvidia Quadro P400 not working on Ubuntu server via GPU/PCIe passthrough:
@thefrisianclause Hello, not sure if I missed something from the above thread, but did any of you try to turn off the CPUID "hypervisor present" bit on an Intel-based XCP-ng host VM using this technique from the thread referenced by @warriorcookie above? https://xcp-ng.org/forum/topic/4643/nested-virtualization-of-windows-hyper-v-on-xcp-ng/26
It is the equivalent of the ESXi Hypervisor.CPUID.v0="FALSE" vmx file configuration tweak. It configures the XCP-ng VM to, in effect, lie to the guest OS by saying, "you are not running on a hypervisor."
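If you want to verify from inside the guest whether that bit is actually being exposed, here's a quick sketch. On Linux, CPUID leaf 1, ECX bit 31 surfaces as the `hypervisor` flag in `/proc/cpuinfo`, so no raw CPUID call is needed (the path parameter is just there so you can point it at a test file):

```python
# Check whether the CPUID "hypervisor present" bit (leaf 1, ECX bit 31)
# is visible to this Linux guest. The kernel exposes it as the
# "hypervisor" flag in /proc/cpuinfo.
def hypervisor_bit_present(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # Flags are a space-separated list after the colon.
                return "hypervisor" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("hypervisor bit:", "present" if hypervisor_bit_present() else "absent")
```

If the masking trick works, this should report "absent" inside the VM, same as on bare metal.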
Can you clarify? I thought this thread had been left as "close but no cigar"? Seems to have gotten the attention of some xen devs though...
@thefrisianclause Not all Quadro cards. Datacenter-class cards will pass through fine. Workstation cards do not on Linux guests; they will pass through on Windows guests, as Nvidia removed the check in that driver.
@olivierlambert that is a certainty. But no amount of bribery or blackmail seems to make them want to let us in on the secret...
@olivierlambert said in Nvidia Quadro P400 not working on Ubuntu server via GPU/PCIe passthrough:
- True Type-1 hypervisor (like ESXi, unlike KVM) makes it more isolated but harder to do things in general
- It's as hard in ESXi, but resources on the hypervisor are 2 or 3 orders of magnitude higher than for the Xen project.
Obviously, we are working hard here at Vates to get more people directly involved in the Xen project. But it takes time and a vast amount of money to reach our target. Anyway, I'll try to see what I can do with our resources. The main issue for me now on this feature: it's mainly for non-pro usage, so no company will finance it.
I certainly appreciate the challenge, and I wish I had something to offer to help development-wise.
Perhaps a more "pro" use case could come from nested virtualization with the likes of Hyper-V?
@olivierlambert said in Nvidia Quadro P400 not working on Ubuntu server via GPU/PCIe passthrough:
Maybe a better approach would be to modify Nvidia drivers to change or remove the check. After all, it should be only a kind of `grep` on the word "Xen".
And obviously, this would also be against the EULA.
I've found this but have not had time to play: https://github.com/DualCoder/vgpu_unlock
@olivierlambert said in Nvidia Quadro P400 not working on Ubuntu server via GPU/PCIe passthrough:
It is, but it doesn't answer how much effort is needed to "solve it". ATM, there's no way to change it in Xen. So the great question is the modification scope required
My Pa always said "don't bring me a dead cat without a shovel..."
Sorry, I'm all cats and no shovel today!
Parallel work on what I think could be the solution, but stalled as of now: https://xcp-ng.org/forum/topic/4643/nested-virtualization-of-windows-hyper-v-on-xcp-ng/39
@olivierlambert just wanted to clarify a couple things:
- Nvidia drivers recently added passthrough support, but on Windows guests only. Linux guests are still unsupported.
- While most Quadro cards are supported for passthrough, including the P4000, the P400 is excluded. It's considered consumer grade (GP107, basically a stripped-down GTX 1050).
- Might be a given, but for those who stumble on this thread: both the video device and the audio device need to be passed through separately, as they show up as separate devices under lspci.
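To illustrate that last point, here's a small sketch that walks sysfs and lists every PCI function with Nvidia's vendor ID (0x10de). On a box with a Quadro you should see two entries per card, e.g. `0000:01:00.0` (VGA) and `0000:01:00.1` (HDMI audio), and both need passing through:

```python
# List PCI functions whose vendor is Nvidia (0x10de) by reading the
# "vendor" file of each device under /sys/bus/pci/devices.
import os

def nvidia_pci_functions(sysfs="/sys/bus/pci/devices"):
    if not os.path.isdir(sysfs):
        return []
    found = []
    for dev in sorted(os.listdir(sysfs)):
        try:
            with open(os.path.join(sysfs, dev, "vendor")) as f:
                if f.read().strip() == "0x10de":
                    found.append(dev)
        except OSError:
            continue  # some entries may lack a readable vendor file
    return found

if __name__ == "__main__":
    print(nvidia_pci_functions())
```

Same information as `lspci -nn | grep -i nvidia`, just without needing pciutils installed in a minimal guest.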
I'm confident this is an issue with the Nvidia driver disabling itself when it sees the CPUID "hypervisor present" bit.
On my hardware (Supermicro X9 with an E5-2667 v2), Proxmox and ESXi both worked flawlessly with near bare-metal performance, but on those platforms you're able to change the CPUID hypervisor-present bit.
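For reference, these are the knobs that hide the bit on those two platforms, as I recall them (double-check against the current docs; the ESXi one is the same tweak mentioned earlier in the thread):

```
# Proxmox: in /etc/pve/qemu-server/<vmid>.conf
cpu: host,hidden=1

# ESXi: in the VM's .vmx file
hypervisor.cpuid.v0 = "FALSE"
```

XCP-ng currently has no supported equivalent, which is the whole problem.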
On XCP-ng 8.2 with a Windows guest it works with no issues.
Once I install Linux as the guest, nvidia-smi produces "No devices were found" even though the card shows up with lspci.
I'm trying to migrate from ESXi, and I'm jumping in on this thread as this is a major issue that would prevent me from switching. Unfortunate, as this project ticks all the other boxes that VMware failed to...