@redakula
Well, this was unfortunately one of the potential outcomes. We don't have the hardware to do more in-depth debugging. I will talk to Marek next week (at Xen Summit) about this patch series and whether we can expect it to eventually fix the issue with the Coral TPU.
Will keep you posted.
Posts
-
RE: Coral TPU PCI Passthrough
-
RE: More than 64 vCPU on Debian11 VM and AMD EPYC
@rarturas I'm not sure it's actually doable to run Windows with more than 64 vCPUs. I'm also not surprised your VM isn't booting when you turn ACPI off.
We're actually in the middle of investigating what the vCPU limit for a Windows VM could be, and especially what the gap is to get to 128 vCPUs for Windows. We'll most probably discuss this topic with the community at Xen Summit (in a couple of weeks).
Stay tuned!
-
RE: Coral TPU PCI Passthrough
@andSmv
Hello,
I integrated Marek's patch and built an RPM, so you can install it (you may need to force the RPM install, or extract the xen.gz from the RPM and install it manually if you prefer; see the sketch below). Obviously there's no guarantee it'll work in your case. Moreover, I didn't test the patch, so please back up all your data. It should be harmless, but...
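For example, something along these lines should work (the RPM filename is illustrative; use the file you downloaded):
# force-install the patched hypervisor package over the existing one
rpm -Uvh --force xen-hypervisor-*.rpm
# ...or just extract xen.gz from the RPM and copy it into /boot yourself
rpm2cpio xen-hypervisor-*.rpm | cpio -idmv "./boot/*"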
Here's the link where you can download the RPM (should stay up until the end of the month): https://nextcloud.vates.fr/index.php/s/gd7kMwxHtNEP329
Don't hesitate to ping me if you experience any issue downloading/installing/... the patched Xen.
Hope it helps!
P.S. Make sure you're running XCP-ng 8.3, as I only uploaded the Xen hypervisor RPM (not the libs/tools that usually come with it).
-
RE: Coral TPU PCI Passthrough
@redakula Hello, unfortunately these patches are not in Xen 4.17 (and were never integrated into more recent Xen either). So, to test them, you have to apply the patches manually (normally they should apply as-is to 4.17) and rebuild your Xen.
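Roughly, something like this (patch paths are illustrative, and the usual Xen build dependencies must be installed):
# grab the Xen source at the 4.17.0 release tag
git clone https://xenbits.xen.org/git-http/xen.git
cd xen
git checkout RELEASE-4.17.0
# apply the patches (git am for mail-formatted patches, or patch -p1 for plain diffs)
git am /path/to/coral-tpu-patches/*.patch
# build only the hypervisor; the result ends up in xen/xen.gz
make -j$(nproc) xen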
-
RE: More than 64 vCPU on Debian11 VM and AMD EPYC
@alexredston Hey, sorry I'm a little bit late here. With regard to vCPUs: there's actually a hardcoded limit of 128 in the Xen hypervisor. Moreover, the Xen toolstack (when creating a guest) checks that the guest's vCPU count does not exceed the physical CPUs available on the platform.
Bypassing the 128-vCPU limit will require some rather significant adjustments in the Xen hypervisor (basically the restrictions come from the QEMU IOREQ server and how LAPIC IDs are assigned in Xen). So with the next Xen version this limit could potentially be increased (there's ongoing work on this). A couple more things you probably want to know about this vCPU limit (see also the snippet after the list):
- not all guests can handle that many vCPUs (e.g. Windows will certainly crash)
- when you give a VM such a big vCPU count (basically more than 32), the VM can potentially cause a DoS on the whole platform (this is related to how some routines are "serialized" in the Xen hypervisor). So when you do this, be aware that if your guest is broken, pwned, whatever... your whole platform can potentially become unresponsive.
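If you're curious where the HVM guest ceiling lives, this should show it from the top of a Xen source tree (assuming the define hasn't moved in your version):
# the guest-visible HVM vCPU limit in the Xen public headers
grep -n "HVM_MAX_VCPUS" xen/include/public/hvm/hvm_info_table.h
# expected output, roughly: #define HVM_MAX_VCPUS 128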
-
RE: Cannot start VM to which a sata controller and pci nvme is passed through to.
@RAG67958472
Thank you! This seems to be a bug. At some level, when mapping the NVMe device MMIO frame ef004 (BAR 4), the same guest frame is reused where the SATA device MMIO frame ef138 (BAR 1) is already mapped. That mapping fails, so the domain is stopped by Xen. I have no idea yet which part of the code is responsible for reusing the same guest frame (gfn) for these mappings (probably the toolstack/QEMU, ...).
So first, it would be useful to have the whole Xen trace from the domU start (if I understand correctly, your traces contain the Xen boot plus the bug traces from the 2 times you tried to launch the domU). Are these the only traces you get when you launch the domU?
Second, it would be nice to start Xen in debug mode (normally you have a Xen image built with debug traces activated). Can you please boot this image and provide those traces?
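For reference, something like this should capture more detail (loglvl and guest_loglvl are standard Xen command-line options; add them to the hypervisor line in your grub config):
# dump the hypervisor console ring right after the failed domU start
xl dmesg > /tmp/xen-domU-failure.log
# for more verbose hypervisor output, add these to the Xen command line and reboot:
#   loglvl=all guest_loglvl=all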
I will talk to the Xen maintainers to see if the problem was already reported by other users. (The code which stops the domain didn't change in the most recent Xen, but the issue probably sits in upper layers.)
It would also be very useful to see whether the problem is the same with PVH and HVM guests.
-
RE: Cannot start VM to which a sata controller and pci nvme is passed through to.
@RAG67958472
Hmmm, seems to be a bug. There's something special about machine frame ef004. I suppose it's an MMIO address (a PCI BAR?). Can you please provide the output of the whole Xen log from the beginning with xl dmesg, and also a PCI config space dump with lspci -vvv?
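Something like this should capture both (the BDFs are placeholders; use the ones of your SATA and NVMe devices):
# full hypervisor log since boot
xl dmesg > /tmp/xl-dmesg.txt
# verbose config space dump for each passed-through device
lspci -vvv -s <sata-bdf> > /tmp/sata-lspci.txt
lspci -vvv -s <nvme-bdf> > /tmp/nvme-lspci.txt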
What is weird is that you can pass through both devices individually.
-
RE: Nvidia MiG Support
Hello, I honestly don't know how the Citrix vGPU stuff works, but here are a couple of thoughts on this topic:
If I understand correctly, you're saying Nvidia uses the Linux VFIO framework to enable mediated devices which can be exported to a guest. The VFIO framework isn't supported by Xen, as VFIO needs an IOMMU device managed by the Linux kernel IOMMU driver, and Xen doesn't provide/virtualize IOMMU access to dom0 (Xen manages the IOMMU by itself, but doesn't offer such access to guests).
Basically, to export an SR-IOV virtual function to a guest with Xen you don't have to use VFIO: you can just assign the virtual function's PCI "BDF" id to the guest, and normally the guest should see the device.
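In plain xl terms that would look roughly like this (XCP-ng's xe toolstack has its own equivalent; the BDF below is just an example):
# make the virtual function assignable, then hand it to the running guest
xl pci-assignable-add 0000:41:00.4
xl pci-attach <domid> 0000:41:00.4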
From what I understand, the Nvidia user-mode toolstack (scripts & binaries) doesn't JUST create SR-IOV virtual functions, but also wants to access the VFIO/mdev framework, so the whole thing fails.
So maybe you can check whether the Nvidia tools have an option to just create the SR-IOV functions, OR try to run VFIO in "no-iommu" mode (no IOMMU presence in the Linux kernel required).
BTW, we're working on a project where we intend to use VFIO with dom0, and so we're implementing an IOMMU driver in the dom0 kernel; it would be interesting to know in the future whether this can help with your case.
Hope this helps!
-
RE: VM's with around 24GB+ crashes on migration.
It's obviously not excluded that the issue is related to the memory footprint. Moreover, the first warning "complains" about a memory allocation failure. (I assume the "receiver" node has enough memory to host the VM.)
Normally Xen has no limitation on live-migrating a 24GB VM, so it's difficult to say what the issue is here. But clearly there's a possibility that this is a bug in Xen/the toolstack... Memory fragmentation on the "receiver" node could be an issue too.
You can probably run some different configurations to try to pinpoint this issue.
Maybe to start with, try migrating the VM when no other VMs are running on the "receiver" node. Also try migrating a VM with no network connections (as the issue seems to be related to network backend status changes)....
-
RE: Weird kern.log errors
Yeah, a HW problem seems to be a good guess.
The lead we can follow here is the xen_mc_flush kernel function, which raises a warning when a multicall (hypercall wrapper) fails. The interesting thing would be to take a look at the Xen traces: you can type xl dmesg in dom0 to see if Xen says something more (in case it's unhappy for some reason).
-
RE: VM's with around 24GB+ crashes on migration.
Hmmm, there are two problems here (a page alloc failure warning and a NULL pointer BUG), both in the context of the xenwatch kernel thread, and basically both of them happen when configuring the Xen network frontend/backend communication.
Normally this isn't related to the memory footprint of the VM, but rather to the Xen frontend/backend xenbus communication framework. Do the bugs disappear when you reduce the VM's memory size while keeping all other params/environment the same?
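If you want to poke at the xenbus side while reproducing it, something like this shows the vif frontend/backend state nodes (domid 12 is just an example):
# frontend view, from the guest's xenstore subtree
xenstore-ls /local/domain/12/device/vif
# backend view, from dom0's subtree for that guest
xenstore-ls /local/domain/0/backend/vif/12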
-
RE: Google Coral TPU PCIe Passthrough Woes
@jjgg Here's the link to xen.gz. You need to put it in your /boot folder (back up your existing file!) and make sure your grub.cfg points to it. But first: back up everything you want to back up! The patch is totally untested and doesn't apply as-is (so I needed to adapt it). Normally that's not a big deal and it shouldn't do any harm, but... you never know.
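Roughly (filenames are illustrative; the grub config location depends on BIOS vs UEFI boot):
# keep a copy of the current hypervisor, then drop the patched one in place
cp /boot/xen.gz /boot/xen.gz.orig
cp ~/xen-patched.gz /boot/xen.gz
# check that the hypervisor line in grub still points at /boot/xen.gz
grep -n "xen" /boot/grub/grub.cfg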
I'm also not sure the issue will be fixed. We unfortunately do not have a Coral TPU device at Vates, so we can't do a deeper analysis on this. The person who wrote this patch was trying to fix a different device.
@exime - this is the patched Xen for XCP-ng 4.13.5, so there's a chance it won't work for you (from what I saw, you're running Xen 4.13.4).
Anyway, if we get good news, we'll find a way to fix it for everybody.
-
RE: Google Coral TPU PCIe Passthrough Woes
@jjgg Thank you. Yes, the same problem - an EPT violation. Look, I'll try to figure out what we can do here. There's a patch from the Qubes OS folks that should normally fix the MSI-X PBA issue (not sure it's the right fix, but still... worth trying). This patch applies to recent Xen and hasn't been accepted yet. I will take a look at whether it can be easily backported to XCP-ng's Xen and get back to you.
-
RE: Google Coral TPU PCIe Passthrough Woes
@jjgg Can you please also post the Xen traces after the VM is stopped? (Either from hypervisor.log, or just type xl dmesg under the root account in your dom0.)
-
RE: XCP-ng 8.3 public alpha 🚀
@ashceryth XCP-ng is based on Xen 4.13, so I'm quite sure it doesn't handle the Intel hybrid architecture. I'm not even sure there are ongoing efforts on this support in the Xen Project community.
Moreover, after a very quick check, I didn't see any trace of ARM big.LITTLE support in recent Xen.
I think this kind of feature needs a thorough analysis of how exactly it should be mapped onto hypervisor-based platforms. And I think the answer is not obvious at all.
-
RE: PCI Passthrough of Nvidia GPU and USB add-on card
@jevan223 Well, if you confirm it worked well on i440fx, then the hypothesis is probably wrong. Was it KVM/QEMU virtualization?
-
RE: PCI Passthrough of Nvidia GPU and USB add-on card
@jevan223 This is not about the real hardware. This is about the emulated chipset offered by QEMU to HVM guests (which is the case for a Windows VM).
QEMU can actually emulate 2 chipsets for its guests:
- i440fx: basic PCI bus with CAM access
- Q35: enhanced PCI bus with ECAM access (and thus access to PCIe capabilities).
The problem is that Q35 is not supported by the Xen-dependent parts of the QEMU code, so only i440fx is emulated for Xen HVM guests. We are actually working on enabling Q35 with Xen, but this is a work in progress.
Well, this is a hypothesis which needs to be confirmed, but by the look of the lspci output, there's a good chance that's the reason.
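A quick way to see which chipset a guest actually gets is to look at the emulated host bridge from inside the guest (device names are what QEMU typically reports):
# the host bridge at 00:00.0 reveals the emulated chipset
lspci -nn -s 00:00.0
# an i440fx guest reports an Intel 440FX (82441FX) host bridge;
# a Q35 guest would show the Q35/ICH9-era host bridge instead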
-
RE: Coral TPU PCI Passthrough
@logical-systems I will check which Xen version the patches apply to easily, and if you want I can give you a hand (if needed) to build and install your own Xen, so you can test whether this resolves your issue.
Unfortunately we don't have the relevant HW (Coral TPU) to test it ourselves.
UPDATE: both patches apply to Xen 4.17 (tag RELEASE-4.17.0).
-
RE: PCI Passthrough of Nvidia GPU and USB add-on card
Yes. Some PCI capabilities live beyond the "standard" PCI configuration space of 256 bytes per BDF (PCI device). And unfortunately the "enhanced" configuration access method (ECAM) is not yet provided for Xen HVM guests (it's ongoing work). It would require QEMU (the Xen-related part) to emulate a chipset which offers access to such a method, such as Q35.
Very probably, the Windows drivers for these devices are not happy about not being able to access these fields, so this is potentially the reason these devices malfunction.
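One way to check whether a device actually relies on extended config space is to look at its capability offsets on the host (the BDF is a placeholder): anything listed at 0x100 or above is only reachable via ECAM and is therefore invisible to an i440fx HVM guest.
# extended capabilities have 3-hex-digit offsets (>= 0x100)
lspci -vvv -s <bdf> | grep -E "Capabilities: \[[0-9a-f]{3}"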
A good way to confirm this would be to pass these devices through to a Linux guest, so we could possibly add some extended traces. And possibly pass them through to a PVH Linux guest and see how they are handled there (PVH guests do not use QEMU for PCI bus emulation).