Team - Hypervisor & Kernel


Posts

  • RE: Epyc VM to VM networking slow

    @Forza said in Epyc VM to VM networking slow:

    olivierlambert said in Epyc VM to VM networking slow:

    If we become official partners, we'll be able to get more advanced access to their teams. I still have hope; it's just that the pace isn't up to me.

    Hi, is there anything new to report on this? We have very powerful machines, but they are unfortunately limited by this stubborn issue.

    Can you test https://xcp-ng.org/forum/topic/10862/early-testable-pvh-support ?

    We observe very significant improvements on AMD EPYC with PVH.

    We're still pinpointing the issue with HVM; the current hypothesis is an issue with memory typing (the grant table being accessed as uncacheable (UC), which is very slow) related to grant-table positioning in HVM.
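
    If you want to measure the difference on your own hosts, a simple VM-to-VM throughput run before and after switching a guest to PVH shows the gap clearly. A minimal sketch using iperf3 (any throughput tool works; the address is a placeholder):

    # On the first VM, start an iperf3 server:
    iperf3 -s

    # On the second VM, run a 30-second test with 4 parallel streams
    # (replace 10.0.0.1 with the first VM's address):
    iperf3 -c 10.0.0.1 -t 30 -P 4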

  • Early testable PVH support

    Hello!

    Xen supports three virtualization modes: PV (deprecated), HVM, and PVH.
    While HVM is supported (and used) in XCP-ng, PVH hasn't been integrated yet; as of today, though, XCP-ng 8.3 has some early support for it.

    The PVH mode was officially introduced in Xen 4.10 as a leaner, simpler variant of HVM (it was initially named HVM-lite), with little to no emulation, only PV devices, and less overall complexity.
    It aims to be a great, simpler alternative to traditional HVM for modern guests.

    A quick comparison of all modes:
    PV mode:

    • needs specific guest support
    • only PV devices (no legacy hardware)
    • relies on PV MMU (less efficient than VT-x EPT/AMD-V NPT overall, but works without virtualization technologies)
    • unsafe against Spectre-style attacks
    • supports: direct kernel boot, pygrub
    • deprecated

    HVM mode:

    • emulates a machine that behaves like real hardware (using QEMU)
      • including legacy platform hardware (IOAPIC, HPET, PIT, PIC, ...)
      • including (potentially legacy) I/O hardware (network card, storage, ...)
      • some of these can be disabled by the guest (PVHVM), but they all exist when the guest starts
    • relies on VT-x/AMD-V
    • traditional PC boot flow (BIOS/UEFI)
    • optional PV devices (opt-in by guest; PVHVM)
    • performs better than PV mode on most machines
    • compatible with pretty much all guests (including Windows and legacy OS)

    PVH mode:

    • relies on VT-x/AMD-V (on the Xen side, it uses the same code as HVM)
    • minimal emulation (e.g. no QEMU); much simpler overall, with lower overhead
    • only PV devices
    • supports: direct kernel boot (like PV), PVH-GRUB, or UEFI boot (PVH-OVMF)
    • needs guest support (but much less intrusive than PV)
    • works with most Linux distros and most BSDs; doesn't work with Windows (yet)
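
    If you're unsure which mode an existing Linux guest actually booted in, the kernel log is a quick indicator (a rough check; the exact wording varies across kernel versions):

    # Inside the guest: look for the Xen detection messages logged at boot.
    dmesg | grep -i xen | head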

    Installation

    🚧 Keep in mind that this is very experimental and not officially supported. 🚧

    PVH vncterm patches (optional)

    While XCP-ng 8.3 does have support for PVH, a XAPI bug prevents you from accessing the guest console. I provide patched XAPI packages that fix the console.

    # Download repo file for XCP-ng 8.3
    wget https://koji.xcp-ng.org/repos/user/8/8.3/xcpng-users.repo -O /etc/yum.repos.d/xcpng-users.repo
    
    # You may need to update to testing repositories.
    yum update --enablerepo=xcp-ng-testing
    
    # Installing the patched XAPI packages (you should see `.pvh` XAPI packages)
    yum update --enablerepo=xcp-ng-tae2
    

    This is optional, but you probably want it in order to see what's going on in your guest without having to rely on SSH or xl console.
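
    To double-check that the patched packages were actually pulled in, you can query the RPM database (the `.pvh` suffix in the package version is what to look for):

    # List the installed XAPI packages; the patched builds carry a `.pvh` suffix.
    rpm -qa | grep -i xapi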

    Making/converting into a PVH guest

    You can convert any guest into a PVH guest by modifying its domain-type parameter:

    xe vm-param-set uuid={UUID} domain-type=pvh
    

    You can revert this change by setting it back to HVM:

    xe vm-param-set uuid={UUID} domain-type=hvm
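
    Either way, you can confirm the current mode by reading the parameter back:

    xe vm-param-get uuid={UUID} param-name=domain-type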
    

    PVH OVMF (boot using UEFI)

    You also need a PVH-specific OVMF build that can be used to boot the guest in UEFI mode.

    Currently, there is no package available that provides it, but I provide a custom-built OVMF with PVH support:
    https://nextcloud.vates.tech/index.php/s/L8a4meCLp8aZnGZ

    You need to place this file on the host as /var/lib/xcp/guest/pvh-ovmf.elf (creating any missing parent directories).
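
    For illustration, the download-and-install steps could look like this (assuming the Nextcloud share link supports direct download by appending /download, as public Nextcloud shares usually do):

    # Download the custom OVMF build (the /download suffix is a Nextcloud convention).
    wget https://nextcloud.vates.tech/index.php/s/L8a4meCLp8aZnGZ/download -O pvh-ovmf.elf

    # Create the destination directory, including missing parents, and install the file.
    mkdir -p /var/lib/xcp/guest
    mv pvh-ovmf.elf /var/lib/xcp/guest/pvh-ovmf.elf
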
    Then set it as the PV-kernel:

    xe vm-param-set uuid={UUID} PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
    

    Once done, you can boot your guest as usual.
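
    That is, with the usual start command; nothing PVH-specific is needed at this point:

    xe vm-start uuid={UUID}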

    Tested guests

    On many Linux distros, you need to add console=hvc0 to the kernel command line; otherwise, you may not have access to the PV console (a GRUB example is sketched after the list below).

    • Alpine Linux
    • Debian
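
    As mentioned above, here is what adding console=hvc0 can look like on a GRUB2-based guest (a sketch, assuming the stock /etc/default/grub layout; file paths and the update command vary by distro):

    # Inside the guest: append console=hvc0 to the kernel command line.
    sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 console=hvc0"/' /etc/default/grub

    # Regenerate the GRUB configuration (on some distros this is
    # grub2-mkconfig -o /boot/grub2/grub.cfg instead).
    update-grub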

    Known limitations

    • Some stats show "no stats" (XAPI bug?)
    • No support for booting from an ISO; you can work around this by importing your ISO as a disk and attaching it as a read-only disk (see the sketch after this list)
    • No live migration support (or at least, don't expect it to work properly)
    • No PCI passthrough support
    • No actual display (only PV console)
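
    For the ISO limitation, a hypothetical sketch of the workaround using standard xe commands ({SR_UUID}, {UUID}, guest.iso and the device number are placeholders; this assumes a raw import into a VDI sized to match the ISO):

    # Create a VDI the size of the ISO, import the ISO's raw bytes into it,
    # then attach it to the VM as a read-only disk.
    SIZE=$(stat -c%s guest.iso)
    VDI=$(xe vdi-create sr-uuid={SR_UUID} name-label=pvh-install-media virtual-size=$SIZE type=user)
    xe vdi-import uuid=$VDI filename=guest.iso
    xe vbd-create vm-uuid={UUID} vdi-uuid=$VDI device=1 mode=RO type=Disk
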
  • RE: Install guest tools on Windows server 2019 VM

    @Paolo Yes, you should uninstall that first. You can also try XenClean to remove all the remaining Xen drivers before installing XS 9.4.0 drivers.

  • RE: Install guest tools on Windows server 2019 VM

    @Paolo Did you have any third-party storage drivers installed on your VM before installing the guest tools? The Xen driver doesn't like those.