Posts made by tomg
-
RE: PCI Nvidia GPU Passthrough boot delay
@olivierlambert any idea if Xen 4.17 will be coming to XCP-ng? :]
-
RE: PCI Nvidia GPU Passthrough boot delay
@olivierlambert said in PCI Nvidia GPU Passthrough boot delay:
If I understood correctly what I've been told, the "hand time" is in fact mapping (and remapping) the PCIe BAR.
It takes some time to be done, until we (ie the Xen community) got some patches merged in Xen, allowing to enjoy IOMMU superpages, which should turn all of this into a fraction of a second. I don't know the current status of this.
Regarding the permissive thing, I don't know; I have to ask around too.
Looks like this finally got committed last week!
https://github.com/xen-project/xen/commits/master/xen/drivers/passthrough
@olivierlambert - any idea when this will make it into XCP-ng? :]
-
RE: XCP 8.2 VCPUs-max settings
@sapcode Hi there --
I know this was quite a while ago, but I thought I should post here since I ran into the exact same issue when trying to add more than 64 vCPUs. It ended up being an ACPI problem on the VM itself. Things apparently get wonky past 64 CPUs with ACPI, and since VMs don't really need ACPI, you can disable it and voila, more than 64 vCPUs :]
Just modify your VM's platform param; there's no need to pass anything as a kernel option inside the VM.
xe vm-param-set uuid=<vm-uuid> platform:acpi=0
Hope this helps anyone else trying to run more than 64 vCPUs.
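For anyone scripting this, here's a minimal sketch. The VM name "myvm" is a placeholder, and the script falls back to a message when `xe` isn't available (i.e. when not run in dom0):

```shell
#!/bin/sh
# Sketch: disable ACPI on a VM so it can boot with more than 64 vCPUs.
# "myvm" is a placeholder name; run this in dom0 on the XCP-ng host.
VM_NAME="myvm"
if command -v xe >/dev/null 2>&1; then
  # Look up the VM's uuid, set platform:acpi=0, then read it back to verify.
  UUID=$(xe vm-list name-label="$VM_NAME" --minimal)
  xe vm-param-set uuid="$UUID" platform:acpi=0
  ACPI=$(xe vm-param-get uuid="$UUID" param-name=platform param-key=acpi)
else
  ACPI="unavailable (xe not found; run in dom0)"
fi
echo "acpi=$ACPI"
```

After the change, a fresh boot of the VM should accept a VCPUs-max above 64.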
cc @jeff
-
RE: PCI Nvidia GPU Passthrough boot delay
@andyhhp said in PCI Nvidia GPU Passthrough boot delay:
The good news is that IOMMU superpage support is in progress upstream, and should turn this delay into milliseconds.
Assuming this is the work you’re referring to: https://lists.xenproject.org/archives/html/xen-devel/2022-01/msg00277.html
Any idea when this will be committed?
-
RE: Is there something like VMRC (VMware Remote Console) for XCP-ng?
Because it sounds like he's setting up a thin client. No need to have cluster management software running. I understand that you're pushing the use of your project, I get that, it's your baby and I'll be the first to salute you on an amazing job done but he should be made aware of all the possibilities. Knowledge is power.
-
RE: Is there something like VMRC (VMware Remote Console) for XCP-ng?
If you don't want to use mgmt software like Xen Orchestra, you can just connect to the VNC console of the remote VM. This used to work via TCP ports, but as of a few years back qemu started using UNIX sockets by default for the VNC console.
You can see the option being passed to qemu at run time:
-vnc unix:/var/run/xen/vnc-3,lock-key-sync=off
where 3 is the domain ID of the domU.
SSH supports UNIX socket forwarding, but I had a hard time getting it to work with qemu's VNC socket; it could be a umask issue, but I didn't look further into it. Socat works well, the only caveat being that you need to install it on the Xen host. Here is a write-up on how to use it:
https://www.nico.schottelius.org/blog/tunneling-qemu-kvm-unix-socket-via-ssh/
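A minimal sketch of the socat approach, assuming the socket naming shown above. The domain ID 3 is a placeholder (check yours with `xl list`), and the script only prints the command when socat or the socket isn't available:

```shell
#!/bin/sh
# Sketch: expose a domU's qemu VNC UNIX socket on a local TCP port via socat.
# DOMID is a placeholder; the socket path follows the naming seen above.
DOMID=3
SOCK="/var/run/xen/vnc-${DOMID}"
PORT=$((5900 + DOMID))   # conventional VNC display-to-port mapping
if command -v socat >/dev/null 2>&1 && [ -S "$SOCK" ]; then
  # Run on the Xen host: relay TCP connections into the UNIX socket.
  socat TCP-LISTEN:"$PORT",bind=127.0.0.1,reuseaddr,fork UNIX-CONNECT:"$SOCK"
else
  echo "socat or ${SOCK} not available here; on the Xen host, run:"
  echo "  socat TCP-LISTEN:${PORT},bind=127.0.0.1,reuseaddr,fork UNIX-CONNECT:${SOCK}"
fi
```

Then tunnel that port from your workstation (e.g. `ssh -L 5903:127.0.0.1:5903 root@xen-host`) and point a VNC viewer at localhost:5903.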
HTH
-
RE: PCI Nvidia GPU Passthrough boot delay
Trying this on a guest without GPU drivers yields the same delay but no Write-back warning, which is fairly conclusive proof that the delay is entirely down to the slow reading of the GPU's BAR. Not that I doubted anyone ;]
It already made sense previously, since the A100 has such a large BAR compared to the others.
@andyhhp can you point me to the work being done upstream around IOMMU superpage support?
-
RE: PCI Nvidia GPU Passthrough boot delay
Thank you both for the reply.
What's interesting is that Xen reports support for larger IOMMU page sizes, not just 4k. So is it just lying to me that these are supported? :]
(XEN) [ 6.844448] Intel VT-d iommu 8 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.852537] Intel VT-d iommu 7 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.860614] Intel VT-d iommu 6 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.868703] Intel VT-d iommu 5 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.876790] Intel VT-d iommu 4 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.884877] Intel VT-d iommu 3 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.892963] Intel VT-d iommu 2 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.901040] Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.909131] Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 6.917221] Intel VT-d iommu 9 supported page sizes: 4kB, 2MB, 1GB
(XEN) [ 7.495448] HVM: HAP page sizes: 4kB, 2MB, 1GB
I gather from your explanation that it won't matter whether or not I get permissive mode to work... still, I wanted to try out of curiosity.
-
PCI Nvidia GPU Passthrough boot delay
Hi all,
I've had great success passing through Nvidia GPUs, from Quadros to Teslas and now Ampere, and have had no problems with multiple GPUs either. However, I cannot get rid of a delay on VM start, which appears to be triggered by a call to the PCI device that hangs. The timeout seems to vary by GPU model: it is consistently 20 - 30 seconds per RTX Ampere GPU, about 20 - 25 seconds on Quadros, and ~90 seconds on an A100.
What's worse, on the A100 the calls seem to be made serially, so if I pass through four A100s the wait to boot will be 4x90s; not optimal.
Here is what this looks like from qemu's perspective on an A4000. For some reason the calls appear staggered on this device, so it completes for both devices in 20s:
Apr 5 21:09:40 qemu-dm-2[9040]: Moving to cgroup slice ''
Apr 5 21:09:40 qemu-dm-2[9040]: core dump limit: 67108864
Apr 5 21:09:40 qemu-dm-2[9040]: qemu-dm-2: Machine type 'pc-0.10' is deprecated: use a newer machine type instead
Apr 5 21:09:40 qemu-dm-2[9040]: char device redirected to /dev/pts/3 (label serial0)
Apr 5 21:10:00 qemu-dm-2[9040]: [00:05.0] Write-back to unknown field 0xc4 (partially) inhibited (0x00000000)
Apr 5 21:10:00 qemu-dm-2[9040]: [00:05.0] If the device doesn't work, try enabling permissive mode
Apr 5 21:10:00 qemu-dm-2[9040]: [00:05.0] (unsafe) and if it helps report the problem to xen-devel
Apr 5 21:10:00 qemu-dm-2[9040]: [00:06.0] Write-back to unknown field 0xc4 (partially) inhibited (0x00000000)
Apr 5 21:10:00 qemu-dm-2[9040]: [00:06.0] If the device doesn't work, try enabling permissive mode
Apr 5 21:10:00 qemu-dm-2[9040]: [00:06.0] (unsafe) and if it helps report the problem to xen-devel
Here is the same for a single A100; as can be seen, we just sit there from 06:23:42 to 06:25:17:
Apr 5 06:23:42 qemu-dm-4[3823]: Moving to cgroup slice ''
Apr 5 06:23:42 qemu-dm-4[3823]: core dump limit: 67108864
Apr 5 06:23:42 qemu-dm-4[3823]: qemu-dm-4: Machine type 'pc-0.10' is deprecated: use a newer machine type instead
Apr 5 06:23:42 qemu-dm-4[3823]: char device redirected to /dev/pts/2 (label serial0)
Apr 5 06:25:17 qemu-dm-4[3823]: [00:05.0] Write-back to unknown field 0xc4 (partially) inhibited (0x00000000)
Apr 5 06:25:17 qemu-dm-4[3823]: [00:05.0] If the device doesn't work, try enabling permissive mode
Apr 5 06:25:17 qemu-dm-4[3823]: [00:05.0] (unsafe) and if it helps report the problem to xen-devel
This brings me to PCI permissive mode. I thought, why not try it to see if whatever this call is actually gets made in permissive mode. But, try as I might, I cannot get PCI permissive mode to enable on the actual domU.
I've booted dom0 with
xen-pciback.permissive
and even tried
pci=resource_alignment=
which I believe is deprecated. Xen's pciback tells me permissive mode is on:
cat /sys/module/xen_pciback/parameters/permissive
Y
I even set the mode by hand and verified the device(s) in question show up here:
cat /sys/bus/pci/drivers/pciback/permissive
0000:ca:00.0
0000:98:00.0
0000:4b:00.0
0000:31:00.0
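For reference, setting it by hand looks like the sketch below. The BDF is a placeholder address, and the script just prints a message when pciback's sysfs node isn't present (i.e. outside dom0):

```shell
#!/bin/sh
# Sketch: add one device to pciback's permissive list "by hand" from dom0.
# BDF is a placeholder; substitute your passed-through device's address.
BDF="0000:ca:00.0"
NODE="/sys/bus/pci/drivers/pciback/permissive"
if [ -w "$NODE" ]; then
  echo "$BDF" > "$NODE"   # writing a slot name appends it to the permissive list
  cat "$NODE"             # the device should now be listed
else
  echo "pciback permissive node not present; run in dom0 with xen-pciback loaded"
fi
```

Note this only flips the dom0/pciback side; as shown below, qemu still gets told permissive:false via QMP.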
Even the kernel says, hey I'm enabling PCI permissive on these devices
[Tue Apr 5 21:05:00 2022] pciback 0000:31:00.0: enabling permissive mode configuration space accesses!
[Tue Apr 5 21:05:00 2022] pciback 0000:31:00.0: permissive mode is potentially unsafe!
[Tue Apr 5 21:05:00 2022] pciback 0000:4b:00.0: enabling permissive mode configuration space accesses!
[Tue Apr 5 21:05:00 2022] pciback 0000:4b:00.0: permissive mode is potentially unsafe!
[Tue Apr 5 21:05:00 2022] pciback 0000:ca:00.0: enabling permissive mode configuration space accesses!
[Tue Apr 5 21:05:00 2022] pciback 0000:ca:00.0: permissive mode is potentially unsafe!
[Tue Apr 5 21:05:00 2022] pciback 0000:98:00.0: enabling permissive mode configuration space accesses!
[Tue Apr 5 21:05:00 2022] pciback 0000:98:00.0: permissive mode is potentially unsafe!
I also added
pci_permissive=1
to the VM's other-config (along with the PCI addresses, of course). I even tried something like
other-config:pci=0/0000:ca:00.0,permissive=1
as a wild shot. After all this, the domU still boots with permissive mode disabled:
xenopsd-xc: [debug||32 ||xenops] QMP command for domid 2: {"execute":"device_add","id":"qmp-000007-2","arguments":{"driver":"xen-pci-passthrough","id":"pci-pt-ca_00.0","hostaddr":"0000:ca:00.0","permissive":false}}
Has anyone had success enabling permissive mode? Am I missing something? Or, speaking to the larger problem, has anyone encountered this weird delay on domU start with GPU passthrough?