@chicagomed said in Passthrough Contention Problems with Console and Linux VM:
@TeddyAstie great will take a look this weekend. Is there anything in particular you want us to test / check out?
What works/doesn't work and overall performance.
@DustinB said in Passthrough Contention Problems with Console and Linux VM:
@olivierlambert yes please
It's something we're still evaluating, and there are known bugs (including some with unknown causes).
There are still rough edges and not everything works perfectly, but it's mostly there.
What works (tested):
What doesn't work:
To test it (not production-ready):
# Download repo file for XCP-ng 8.3
wget https://koji.xcp-ng.org/repos/user/8/8.3/xcpng-users.repo -O /etc/yum.repos.d/xcpng-users.repo
yum update --enablerepo=xcp-ng-tae1
You should see virtiovga packages for QEMU and Xapi.
Then, set virtio-vga for your VM:
xe vm-param-set uuid=GUEST_UUID platform:vga=virtio
(or vm-param-add depending on whether or not the parameter has been set previously)
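For reference, the vm-param-add form targets the platform map directly. A sketch (GUEST_UUID is a placeholder for your VM's UUID, as shown by xe vm-list):

```shell
# Use this when platform:vga has never been set on the VM;
# GUEST_UUID is a placeholder for your VM's UUID (see `xe vm-list`).
xe vm-param-add uuid=GUEST_UUID param-name=platform vga=virtio
```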
Regarding Windows support: the KVM VirtIO drivers provide a "virtio-gpu DoD" driver, which should work (tested with upstream Xen, but not yet with XCP-ng).
Hello,
Make sure Intel VMD is disabled (it is Intel's hardware RAID feature, and it doesn't currently work on XCP-ng; you probably don't need it unless you are looking to build a RAID). We found that some modern platforms enable it by default (which also causes issues with Windows).
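One way to check from a running system is to look for the VMD controller in lspci output. A sketch, run here against a sample line (the BDF and device name are illustrative); on a real host, pipe lspci itself into the check:

```shell
# Sketch: detect an Intel VMD controller in `lspci` output.
# The sample line below is illustrative; on a real host use: lspci | check_vmd
check_vmd() { grep -qi "Volume Management Device" && echo "VMD enabled"; }
sample='10000:e0:17.0 RAID bus controller: Intel Corporation Volume Management Device NVMe RAID Controller'
status=$(echo "$sample" | check_vmd)
echo "$status"
```

If the check prints nothing, no VMD controller is exposed; otherwise, disable VMD in the firmware setup.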
@emuchogu-0 said in AI on XCP‑ng 8.3: Not Ready for Prime Time? FlashAttention/ROCm Passthrough Stalls vs Proxmox & Bare‑Metal:
xl dmesg (dom0) and dom0 dmesg
Guest dmesg filtered for amdgpu, rocm, hsa, xnack, pasid, iommu, fault messages
Guest lspci -vv for the GPU (MSI/MSI-X state, BARs)
rocminfo from the guest
Minimal reproducer scripts for llama.cpp and ollama (FlashAttention on/off)
You need to provide this information; we can't blindly guess where something is failing.
@john.c said in XCP-ng 8.3 updates announcements and testing:
I don’t have AMD-based hosts for XCP-ng. However, may I suggest an additional validation test of this change against Debian 13, when stable is released during or following tomorrow. I reckon it should work - newer Linux kernel, 6.12 series - though I can’t be sure! Best to check to avoid nasty surprises.
The performance fix is related to the kernel version. All kernels >= 5.19 (or ones that have https://lore.kernel.org/all/20220530082634.6339-1-jgross@suse.com/) work with it; this includes Debian 13.
16 EiB is pretty close (1 byte off) to 18446744073709551615 bytes, which is the maximum representable 64-bit number.
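The arithmetic can be checked in a shell (assuming 64-bit shell arithmetic, e.g. bash on a 64-bit platform): the all-ones 64-bit pattern printed as unsigned is exactly 2^64 - 1, i.e. one byte short of 16 EiB.

```shell
# ~0 is the all-ones bit pattern; printed as unsigned (64-bit shell
# arithmetic assumed) it is 2^64 - 1 = 18446744073709551615 bytes.
max=$(printf '%u' $((~0)))
echo "$max"
```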
Unikraft would be a good fit for RAM-constrained devices, being able to run useful VMs in 32-64 MB each.
@deefdragon said in VM UUID via dmidecode does not match VM ID in xen-orchestra:
Out of curiosity, I dumped the DMI into a bin and opened it up in a hex editor.
I am seeing the ASCII of the ID, but also a variant encoded in binary. In both cases, it's formatted as
0b08f477-491a-a982-23c4-d224723624ea
I believe the ASCII version is the one that gets populated into the serial number, as it comes after ASCII-encoded versions of the 3 lines above it in the decode.
In SMBIOS 2.8, the UUID is supposed to be encoded in little endian (i.e. the Microsoft GUID format), yet it is stored as big endian instead. So when Linux generates the UUID string from the SMBIOS table, it treats it as little endian, which causes this mismatch.
SMBIOS 2.4 is supposed to be used (which appears to use big-endian UUIDs), but for some reason, something in the XCP-ng UEFI support forces it to SMBIOS 2.8.
So the binary UUID is the same; it is just interpreted with a different endianness due to the accidental format change.
@lovvel from a software standpoint, this is a 16-core CPU, and AFAICT Xen doesn't account for these slight differences between cores.
To be fair, it's not really easy to know in practice whether a 3D V-Cache core will be faster than a non-3D-V-Cache one for a specific workload.
@deefdragon can you provide us the output of dmidecode in the guest?
deef@k31-w-3bfbbe:~$ sudo cat /sys/devices/virtual/dmi/id/product_serial
0b08f477-491a-a982-23c4-d224723624ea
deef@k31-w-3bfbbe:~$ sudo cat /sys/devices/virtual/dmi/id/product_uuid
77f4080b-1a49-82a9-23c4-d224723624ea
deef@k31-w-3bfbbe:~$ sudo cat /sys/hypervisor/uuid
0b08f477-491a-a982-23c4-d224723624ea
It looks like an endianness issue (0b08f477 vs 77f4080b).
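The mismatch can be reproduced by byte-swapping the first three UUID fields, which are the ones SMBIOS stores with an endianness (the last two fields are plain byte arrays). A small sketch:

```shell
# Swap the byte order of one hex field, as happens when a big-endian-stored
# UUID field is read back as little-endian (per SMBIOS 2.8 rules).
swap() { echo "$1" | sed 's/../& /g' | awk '{ for (i = NF; i > 0; i--) printf "%s", $i }'; }

uuid="0b08f477-491a-a982-23c4-d224723624ea"
oldIFS=$IFS; IFS='-'; set -- $uuid; IFS=$oldIFS
# Only the first three fields are endianness-sensitive; the last two pass through.
result="$(swap "$1")-$(swap "$2")-$(swap "$3")-$4-$5"
echo "$result"   # 77f4080b-1a49-82a9-23c4-d224723624ea
```

This turns the product_serial value into exactly the product_uuid value shown above.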
@hoh said in Early testable PVH support:
Then I tried to get rid of the (fake) UEFI magic.
Well, it is actually a full standard UEFI implementation, but one that works in PVH instead of HVM.
I thought it should work to just change the PV-bootloader to pygrub. Calling pygrub on the disk image works fine and is able to extract the images and args:
# pygrub -l alpine.img
Using <class 'grub.GrubConf.Grub2ConfigFile'> to parse /boot/grub/grub.cfg
title: Alpine, with Linux virt
  root: None
  kernel: /boot/vmlinuz-virt
  args: root=UUID=4c6dcb06-20ff-4bcf-be4d-cb399244c4c6 ro rootfstype=ext4 console=hvc0
  initrd: /boot/initramfs-virt
But starting the VM fails. It looks like it starts but then immediately something calls force shutdown, I'll dive deeper into the logs later.
But setting everything manually actually works. If I extract the kernel and initrd to dom-0 and configure
PV-kernel=/var/lib/xcp/guest/kernel PV-ramdisk=/var/lib/xcp/guest/ramdisk PV-args="root=/dev/xvda1 ro rootfstype=ext4 console=hvc0"
it boots, and it looks pretty much the same as with the pvh-ovmf magic. So perhaps the idea of using pygrub is wrong.
I don't know how well pygrub is supported nowadays, especially since PV support was deprecated in XCP-ng 8.2 then completely dropped in XCP-ng 8.3, with the pv-shim (PV-in-PVH) being the only remaining (but not endorsed) way of booting some PV guests today.
In my tests, pygrub was very clunky and rarely worked as I expected. In practice (in what the upstream Xen Project mostly uses), it got replaced with pvgrub/pvhgrub and pvh-ovmf (OvmfXen), which are more reliable and less problematic security-wise (they run in the guest rather than in dom0).
(To use pvhgrub, you need to set a pvhgrub binary (grub-i386-xen_pvh.bin, which is packaged by some distros, e.g. in Alpine Linux's grub-xenhost) as the kernel, as done with pvh-ovmf.)
@hoh said in Early testable PVH support:
@TeddyAstie said in Early testable PVH support:
... PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
Works fine. But IIUC, direct kernel boot should work as well. I tried setting pygrub; the VM loads the kernel and starts, but then immediately stops. Any idea what's wrong?
What are you trying to boot?
cc @andrew
It looks like an issue with https://github.com/xcp-ng-rpms/r8125-module, though I am not completely sure what is going on, nor why the pagetable suddenly becomes invalid.
@gb.123 said in XCP-ng 8.3 updates announcements and testing:
Here is the summary:
If a USB keyboard & mouse are passed through along with the GPU:
the GPU gets stuck in D3 state on shutdown/restart of the VM (classic GPU reset problem).
If no vUSB is passed but the GPU is passed through:
the GPU works correctly and resets correctly on shutdown/restart of the VM.
I have no clue what vUSB may change regarding GPU passthrough.
When I run:
$> lspci
Extract of output (partial):
07:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b8
However, this controller does not show up when I run:
xe pci-list
Is it a bug that lspci and xe pci-list show a different number of devices?
How can I pass this controller through, given that xe pci-list does not show it, so I can't get its UUID?
Will kernel parameters (like in XCP-ng 8.2) work in this case?
Question for @Team-XAPI-Network regarding the filtering on PCI IDs.
I don't think XAPI allows using arbitrary BDFs, but I may be wrong.
Is it safe to run this on an XCP-ng host?
echo 1 > /sys/bus/pci/rescan
(I'm trying to find a way where the PCI card is reset by the host without a complete reboot, though I am aware that the above command will not reset it.)
Probably. But it's not going to change anything, as the device doesn't completely leave the Dom0 when passed through.
FYI, a function-level reset is systematically performed by Xen when doing PCI passthrough, so your device should be reset before entering another guest (aside from reset bugs like the ones you may be hitting).
Also, is it advisable to use:
xl pci-assignable-add 07:00.0
in XCP-ng 8.3? Or is this method deprecated?
I don't think XAPI supports this PCI passthrough approach.
This is a command which allows dynamically removing a device from Dom0 and putting it into the "quarantine domain", so that it is ready to be passed through.
Current XAPI uses the approach of having a set of "passthrough-able" devices defined at boot time by modifying the xen-pciback.hide
kernel parameter, which does the same thing, but at boot time.
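As a sketch of that boot-time approach (the BDF below is the one from this thread; substitute your own, and note that a host reboot is needed afterwards):

```shell
# Add the controller to the boot-time hide list so xen-pciback claims it
# instead of a Dom0 driver; this only takes effect at the next host reboot.
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:07:00.0)"
/opt/xensource/libexec/xen-cmdline --get-dom0   # verify the resulting cmdline
```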
Not a Xen issue.
This seems to be a configuration issue (knowing the output of /opt/xensource/libexec/xen-cmdline --get-dom0
may help) causing an issue in XAPI (@Team-XAPI-Network).
Maybe crashing in xapi/pciops.ml#L71-L80 or xapi/xapi_pci_helpers.ml#L179-L207.
@Fionn with NIC passthrough, the network card is fully controlled by the guest, so the host cannot do anything with it anymore.
If you need to set up something for this network card (e.g. MAC spoofing), it has to be done from within the guest.
@Forza said in Epyc VM to VM networking slow:
olivierlambert said in Epyc VM to VM networking slow:
If we become partners officially, we'll be able to have more advanced access to their teams. I still have hope; it's just that the pace isn't up to me.
Hi, is there anything new to report on this? We have very powerful machines, but unfortunately they are limited by this stubborn issue.
Can you test https://xcp-ng.org/forum/topic/10862/early-testable-pvh-support ?
We observe very significant improvements on AMD EPYC with PVH.
We're still pinpointing the issue with HVM; the current hypothesis is an issue with memory typing (the grant table being accessed as uncacheable (UC), which is very slow) related to grant-table positioning in HVM.
Hello !
Xen supports 3 virtualization modes: PV (deprecated), HVM (used in XCP-ng), and PVH.
While HVM is supported (and used) in XCP-ng, PVH hasn't been integrated yet, but XCP-ng 8.3 now has some early support for it.
The PVH mode was officially introduced in Xen 4.10 as a leaner, simpler variant of HVM (it was initially named HVM-lite), with little to no emulation, only PV devices, and less overall complexity.
It aims to be a great and simpler alternative to traditional HVM for modern guests.
A quick comparison of all modes:
PV mode:
HVM mode:
PVH mode:
Keep in mind that this is very experimental and not officially supported.
While XCP-ng 8.3 actually has support for PVH, due to a XAPI bug you will not be able to access the guest console. I provide a patched XAPI that fixes the console.
# Download repo file for XCP-ng 8.3
wget https://koji.xcp-ng.org/repos/user/8/8.3/xcpng-users.repo -O /etc/yum.repos.d/xcpng-users.repo
# You may need to update to testing repositories.
yum update --enablerepo=xcp-ng-testing
# Installing the patched XAPI packages (you should see `.pvh` XAPI packages)
yum update --enablerepo=xcp-ng-tae2
This is optional, but you probably want it so you can see what's going on in your guest without having to rely on SSH or xl console.
You can convert any guest into a PVH guest by modifying its domain-type
parameter.
xe vm-param-set uuid={UUID} domain-type=pvh
And revert this change by changing it back to HVM:
xe vm-param-set uuid={UUID} domain-type=hvm
You also need a PVH-specific OVMF build that can be used to boot the guest in UEFI mode.
Currently, there is no package available for it, but I provide a custom-built OVMF with PVH support:
https://nextcloud.vates.tech/index.php/s/L8a4meCLp8aZnGZ
You need to place this file on the host as /var/lib/xcp/guest/pvh-ovmf.elf
(creating all missing parent directories).
Then set it as the PV-kernel:
xe vm-param-set uuid={UUID} PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
Once done, you can boot your guest as usual.
On many Linux distros, you need to add console=hvc0
to the kernel command line; otherwise, you may not have access to a PV console.
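On GRUB-based distros, that typically means appending the option to GRUB_CMDLINE_LINUX and regenerating the config. A sketch on a scratch copy of the file (on a real guest, the file is /etc/default/grub, followed by update-grub and a reboot):

```shell
# Demo on a temporary copy; on a real guest, edit /etc/default/grub,
# then run update-grub (or grub-mkconfig) and reboot.
GRUB_FILE=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="quiet"' > "$GRUB_FILE"
sed -i 's/^GRUB_CMDLINE_LINUX="\([^"]*\)"/GRUB_CMDLINE_LINUX="\1 console=hvc0"/' "$GRUB_FILE"
cat "$GRUB_FILE"   # GRUB_CMDLINE_LINUX="quiet console=hvc0"
```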
Hello,
Unfortunately, the current approach to ballooning (dynamic memory) cannot work with PCI passthrough. I don't think it is possible to work around that limitation (at least not in XCP-ng 8.3).
If I adjust dynamic to be 48 GiB/48 GiB the machine will then boot. Once booted, I can then once again apply the desired dynamic config of 16 GiB/48 GiB.
Am I misunderstanding the configuration options and this is just not supported, or have I stumbled across a bug?
What's probably happening is that the dynamic configuration you set is not effective yet and only applies when you reboot; that's why you got PCI passthrough to work, because you actually used a static memory allocation.
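In xe terms, the workaround amounts to pinning the dynamic range to the static maximum before booting. A sketch (48 GiB matches the numbers in the post; {UUID} is a placeholder):

```shell
# Pin dynamic min/max to the static max so the guest boots with a static
# memory allocation, which is what PCI passthrough requires.
xe vm-memory-limits-set uuid={UUID} \
    static-min=48GiB dynamic-min=48GiB dynamic-max=48GiB static-max=48GiB
```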