Early testable PVH support
-
Hello!
Xen supports 3 virtualization modes: PV (deprecated), HVM (used in XCP-ng) and PVH.
While HVM is supported and used in XCP-ng, PVH hasn't been integrated yet; but today, in XCP-ng 8.3, we have some early support for it. The PVH mode was officially introduced in Xen 4.10 as a leaner, simpler variant of HVM (it was initially named HVM-lite), with little to no emulation, only PV devices, and less overall complexity.
It aims to be a great, simpler alternative to traditional HVM for modern guests.
A quick comparison of all modes
PV mode:
- needs specific guest support
- only PV devices (no legacy hardware)
- relies on PV MMU (less efficient than VT-x EPT/AMD-V NPT overall, but works without virtualization technologies)
- unsafe against Spectre-style attacks
- supports: direct kernel boot, pygrub
- deprecated
HVM mode:
- emulates a real-behaving machine (using QEMU)
- including legacy platform hardware (IOAPIC, HPET, PIT, PIC, ...)
- including (maybe legacy) I/O hardware (network card, storage ...)
- some can be disabled by the guest (PVHVM), but they exist at the start of the guest
- relies on VT-x/AMD-V
- traditional PC boot flow (BIOS/UEFI)
- optional PV devices (opt-in by guest; PVHVM)
- performs better than PV mode on most machines
- compatible with pretty much all guests (including Windows and legacy OS)
PVH mode:
- relies on VT-x/AMD-V (on the Xen side, it uses the same code as HVM)
- minimal emulation (e.g. no QEMU); way simpler overall, lower overhead
- only PV devices
- supports: direct kernel boot (like PV), PVH-GRUB, or UEFI boot (PVH-OVMF)
- needs guest support (but much less intrusive than PV)
- works with most Linux distros and most BSD; doesn't work with Windows (yet)
Installation
Keep in mind that this is very experimental and not officially supported.
PVH vncterm patches (optional)
While XCP-ng 8.3 does have support for PVH, a XAPI bug prevents you from accessing the guest console. I provide a patched XAPI that fixes the console.
# Download repo file for XCP-ng 8.3
wget https://koji.xcp-ng.org/repos/user/8/8.3/xcpng-users.repo -O /etc/yum.repos.d/xcpng-users.repo
# You may need to update to testing repositories.
yum update --enablerepo=xcp-ng-testing
# Install the patched XAPI packages (you should see `.pvh` XAPI packages)
yum update --enablerepo=xcp-ng-tae2
This is optional, but you probably want it so you can see what's going on in your guest without having to rely on SSH or xl console.
Making/converting into a PVH guest
You can convert any guest into a PVH guest by modifying its domain-type parameter:
xe vm-param-set uuid={UUID} domain-type=pvh
You can revert this change by setting it back to HVM:
xe vm-param-set uuid={UUID} domain-type=hvm
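To double-check which mode a VM is currently in, you can read the parameter back (a quick sanity check; {UUID} is your VM's UUID):

```shell
# Prints the VM's current virtualization mode ("pvh" or "hvm")
xe vm-param-get uuid={UUID} param-name=domain-type
```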
PVH OVMF (boot using UEFI)
To boot the guest in UEFI mode, you also need a PVH-specific OVMF build.
Currently, there is no package available for it, but I provide a custom-built OVMF with PVH support:
https://nextcloud.vates.tech/index.php/s/L8a4meCLp8aZnGZ
You need to place this file on the host as
/var/lib/xcp/guest/pvh-ovmf.elf
(create all missing parent directories).
Then set it as PV-kernel:
xe vm-param-set uuid={UUID} PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
Once done, you can boot your guest as usual.
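Putting the steps above together, here is a minimal sketch of the whole PVH + OVMF setup (assuming you have already downloaded pvh-ovmf.elf from the link above into the current directory on the host; {UUID} is the VM's UUID):

```shell
# Create the expected directory and install the PVH OVMF binary
mkdir -p /var/lib/xcp/guest
cp pvh-ovmf.elf /var/lib/xcp/guest/pvh-ovmf.elf

# Switch the VM to PVH and boot it through the PVH OVMF build
xe vm-param-set uuid={UUID} domain-type=pvh
xe vm-param-set uuid={UUID} PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
```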
Tested guests
On many Linux distros, you need to add
console=hvc0
to the kernel command line; otherwise, you may not have access to the PV console.
- Alpine Linux
- Debian
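On grub-based distros, one way to make that console=hvc0 change persistent is to edit the grub defaults (a sketch assuming a Debian-style /etc/default/grub; adapt the regeneration command to your distro):

```shell
# Append console=hvc0 to the kernel command line in the grub defaults
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 console=hvc0"/' /etc/default/grub
# Regenerate the grub config (on some distros: grub-mkconfig -o /boot/grub/grub.cfg)
update-grub
```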
Known limitations
- Some stats show "no stats" (XAPI bug?)
- No support for booting from ISO; you can work around this by importing your ISO as a disk and attaching it as a read-only disk
- No live migration support (or at least, don't expect it to work properly)
- No PCI passthrough support
- No actual display (only PV console)
-
Teddy, can the packages from your repo be installed alongside the 8.3 updates released yesterday and the 8.3 security updates released shortly before that?
-
Ping @TeddyAstie
-
Hello!
In BIOS mode, how can I boot a PVH VM?
-
Since the xcp-ng 8.3 update, there is again no console for PVH VMs, neither XO nor XO Lite.
-
@bogikornel This is a known issue. The fix has been merged upstream and the plan is to integrate it into xcp-ng in the coming months.
-
@TeddyAstie said in Early testable PVH support:
... PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
Works fine. But IIUC, direct kernel boot should work as well. I tried setting pygrub, the VM loads the kernel and starts but then immediately stops. Any idea what's wrong?
-
@hoh said in Early testable PVH support:
@TeddyAstie said in Early testable PVH support:
... PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
Works fine. But IIUC, direct kernel boot should work as well. I tried setting pygrub, the VM loads the kernel and starts but then immediately stops. Any idea what's wrong?
What are you trying to boot?
-
@TeddyAstie I'm just playing with it. I installed a fresh Alpine Linux 3.21 (6.x kernel) into a (normal) HVM VM and configured it to boot using grub from UEFI (EFI partition contains just grub, kernel and initrd is inside /boot on the root partition). Works as expected.
Then I changed it to PVH using your instructions, ie. set domain-type=pvh and PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf. Again, works fine.
Then I tried to get rid of the (fake) UEFI magic. I thought it should work to just change the PV-bootloader to pygrub. Calling pygrub on the disk image works fine and is able to extract the images and args
# pygrub -l alpine.img
Using <class 'grub.GrubConf.Grub2ConfigFile'> to parse /boot/grub/grub.cfg
title: Alpine, with Linux virt
root: None
kernel: /boot/vmlinuz-virt
args: root=UUID=4c6dcb06-20ff-4bcf-be4d-cb399244c4c6 ro rootfstype=ext4 console=hvc0
initrd: /boot/initramfs-virt
But starting the VM fails. It looks like it starts but then immediately something calls force shutdown, I'll dive deeper into the logs later.
But setting everything manually actually works. If I extract the kernel and initrd to dom-0 and configure
PV-kernel=/var/lib/xcp/guest/kernel
PV-ramdisk=/var/lib/xcp/guest/ramdisk
PV-args="root=/dev/xvda1 ro rootfstype=ext4 console=hvc0"
it boots and it looks pretty much the same as with the pvh-ovmf magic. So perhaps the idea to use pygrub is wrong.
-
@hoh said in Early testable PVH support:
Then I tried to get rid of the (fake) UEFI magic.
Well, it is actually a full standard UEFI implementation, but that works in PVH instead of HVM.
I thought it should work to just change the PV-bootloader to pygrub. Calling pygrub on the disk image works fine and is able to extract the images and args
# pygrub -l alpine.img
Using <class 'grub.GrubConf.Grub2ConfigFile'> to parse /boot/grub/grub.cfg
title: Alpine, with Linux virt
root: None
kernel: /boot/vmlinuz-virt
args: root=UUID=4c6dcb06-20ff-4bcf-be4d-cb399244c4c6 ro rootfstype=ext4 console=hvc0
initrd: /boot/initramfs-virt
But starting the VM fails. It looks like it starts but then immediately something calls force shutdown, I'll dive deeper into the logs later.
But setting everything manually actually works. If I extract the kernel and initrd to dom-0 and configure
PV-kernel=/var/lib/xcp/guest/kernel
PV-ramdisk=/var/lib/xcp/guest/ramdisk
PV-args="root=/dev/xvda1 ro rootfstype=ext4 console=hvc0"
it boots and it looks pretty much the same as with the pvh-ovmf magic. So perhaps the idea to use pygrub is wrong.
I don't know how well pygrub is supported nowadays, especially since PV support was deprecated in XCP-ng 8.2 and then completely dropped in XCP-ng 8.3, with the pv-shim (PV-in-PVH) being the only remaining (but not endorsed) way of booting some PV guests today.
In my tests, pygrub was very clunky and rarely worked as I expected. In practice (which is what the upstream Xen Project mostly uses), it got replaced by pvgrub/pvhgrub and pvh-ovmf (OvmfXen), which are more reliable and less problematic security-wise (they run in the guest rather than in dom0).
(To use pvhgrub, you need to set a pvhgrub binary (grub-i386-xen_pvh.bin, which is packaged by some distros, e.g. in Alpine Linux's grub-xenhost package) as the kernel, like done with pvh-ovmf.)
-
@TeddyAstie Perfect! I didn't know about pv(h)grub2. I tried it and confirm it works fine with XCP-ng 8.3. So no more trouble with pygrub, PV-bootloader or anything pygrub-like to extract the kernel and initrd from the VM's drive.
I'll summarize for others. You can choose between a generic UEFI boot:
xe vm-param-set PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
and direct grub execution:
xe vm-param-set PV-kernel=/var/lib/xcp/guest/grub-i386-xen_pvh.bin
(just get the binary from the standard grub2 package).
BTW this answers @bogikornel's question about booting a BIOS-mode VM. It's not exactly BIOS mode, but if an HVM guest is using BIOS firmware and grub as a bootloader, it can easily be converted to PVH and use the same grub config, with the same partition layout (works with MBR, no need for an EFI partition ...).
-
@hoh
How can I boot it, because so far I have not been able to. I tried PV-bootloader=pygrub but it fails.
xl dmesg output:
(XEN) [939408.303345] d47v0 Triple fault - invoking HVM shutdown action 3
(XEN) [939408.303347] *** Dumping Dom47 vcpu#0 state: ***
(XEN) [939408.303350] ----[ Xen-4.17.5-13  x86_64  debug=n  Not tainted ]----
(XEN) [939408.303351] CPU:    0
(XEN) [939408.303352] RIP:    0008:[<000000000367a562>]
(XEN) [939408.303354] RFLAGS: 0000000000010046   CONTEXT: hvm guest (d47v0)
(XEN) [939408.303356] rax: 0000000000000000   rbx: 00000000566e6558   rcx: 0000000000000000
(XEN) [939408.303357] rdx: 00000000000000e9   rsi: ffffffff849600a6   rdi: 0000000000000004
(XEN) [939408.303359] rbp: 0000000040000000   rsp: 000000000373cf70   r8:  65584d4d566e6558
(XEN) [939408.303360] r9:  0000000000000026   r10: 6920485650206e65   r11: 7a696c616974696e
(XEN) [939408.303361] r12: 0000000000000005   r13: 0000000000000000   r14: 0000000000000000
(XEN) [939408.303362] r15: 0000000000000000   cr0: 0000000080000011   cr4: 0000000000000020
(XEN) [939408.303363] cr3: 0000000002c10000   cr2: 0000000000000000
(XEN) [939408.303364] fsb: 0000000000000000   gsb: 0000000003738f80   gss: 0000000000000000
(XEN) [939408.303366] ds: 0010   es: 0010   fs: 0000   gs: 0000   ss: 0010   cs: 0008
-
@bogikornel As discussed in the thread, pygrub doesn't work.
(It might be fixable, because copying the kernel and initrd to dom-0 and directly setting PV-kernel, PV-ramdisk and PV-args works, which is basically what pygrub is supposed to do. But I stopped investigating as pvhgrub is a much better option which actually works.)
You have 3 options
- copy the kernel and initrd to dom-0 and configure
xe vm-param-set uuid=... domain-type=pvh
xe vm-param-set uuid=... PV-kernel=/dom-0/path/to/kernel
xe vm-param-set uuid=... PV-ramdisk=/dom-0/path/to/initrd
xe vm-param-set uuid=... PV-args="root=... ro console=hvc0 ..."
xe vm-param-clear uuid=... param-name=PV-bootloader
Not a very practical option, just a PoC.
-
use pvh-ovmf, but this requires a UEFI-enabled VM (i.e. a GPT disk layout with an EFI partition, and either some EFI bootloader, or a kernel placed directly in the EFI partition with proper config, or a UKI)
-
use pvhgrub. You need a recent grub2 to build the image for the i386-xen_pvh target. Or just get the blob; this one is from the Alpine package:
curl https://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/grub-xenhost-2.12-r8.apk | tar -xzf - --strip-components=3 usr/lib/grub-xen/grub-i386-xen_pvh.bin
Save it to dom-0 (e.g. to /var/lib/xcp/guest/grub-i386-xen_pvh.bin) and configure the VM
xe vm-param-set uuid=... domain-type=pvh
xe vm-param-set uuid=... PV-kernel=/var/lib/xcp/guest/grub-i386-xen_pvh.bin
xe vm-param-clear uuid=... param-name=PV-ramdisk
xe vm-param-clear uuid=... param-name=PV-args
xe vm-param-clear uuid=... param-name=PV-bootloader
If the VM has a valid grub2 config, it should work. Of course, you need a Linux kernel with CONFIG_XEN_PVH enabled.
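To verify from inside a running guest that its kernel was built with PVH support, you can check the kernel build config (a sketch; the config location varies by distro, and some expose it at /proc/config.gz instead of /boot):

```shell
# Look for CONFIG_XEN_PVH=y in the installed kernel config
grep CONFIG_XEN_PVH "/boot/config-$(uname -r)" 2>/dev/null \
  || zcat /proc/config.gz 2>/dev/null | grep CONFIG_XEN_PVH
```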