windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess
-
Hello everyone,
After googling about it to refresh concepts, I've ended up confused, could anybody correct me? In short:
PV: the best option when no hardware-assisted virtualization exists (really old processors). Needs kernel modification, which is no problem for Linux.
HVM: when the hardware assists virtualization, as in today's processors. No kernel modification needed, and the OS can act as if it were on real hardware.
PVHVM: the best of both worlds, PV improves its performance helped by hardware assistance.
PVH: this concept is new for me, and according to this post by Olivier, there are two versions, v1 and v2.
After reading the post, my conclusion for XCP-ng is that when I see a VM working in PV mode, it's actually PVHv1, and when I see HVM we are really talking about PVHv2. Am I wrong? PV, HVM and PVHVM are no longer in use under XCP-ng, right? And finally, should a Windows VM run in HVM or PV? I understand that HVM is the best way, but the Citrix tools are referred to as PV drivers, not HVM drivers.
P.S. I moved most of my VMs to HVM when the Spectre and Meltdown vulnerabilities appeared.
As always, thanks in advance for your time.
-
Hi,
- PVHVM is any HVM guest with PV tools (all Linux, or Windows with PV drivers). PVHVM isn't perfect because there's still some emulation in some places
- PVHv2 isn't really production ready (it works to some extent, but not at a satisfactory level) and it's not integrated at all in XCP-ng. So it's just not there for you.
- PV drivers mean you are "closer" to PV in the sense that you remove emulation layers that are present by default in HVM. "PV drivers" means "paravirtualized device drivers", NOT "drivers for PV guests".
-
So, if I've understood you well, the mode that gets the best performance is HVM, which becomes PVHVM on a Linux VM when the tools/management agent are installed (the drivers are already in the kernel), and becomes PVHVM after installing the PV drivers and management agent on a Windows OS. Right?
-
In XCP-ng right now, PVHVM is the best mode for all OS, yes. But it's not really a "mode". It's HVM plus PV drivers inside it.
In fact, there's only PV or HVM.
PVHVM could be translated as "HVM mode" and PV drivers working inside the guest (guest = VM).
PV is being progressively removed. Eventually, PVHv2 will probably land one day, but not before a few more releases.
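If it helps, here is a minimal way to check which of the two real modes a given VM uses, from dom0 with the xe CLI; this assumes the usual convention that an empty HVM-boot-policy means a PV guest while "BIOS order" means an HVM guest (worth double-checking on your XCP-ng version):
xe vm-param-get uuid=<VM UUID> param-name=HVM-boot-policy
# empty output -> PV guest
# "BIOS order" -> HVM guest (it only behaves as "PVHVM" once PV drivers are loaded inside)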
-
Could I suggest updating https://xen-orchestra.com/blog/xen-virtualization-modes/ with your answer as the current state of the art?
Do you want me to post my question to get your answer? It's clear to me now. Another solved post.
Regards
-
I don't see what's different there?
-
Well, it was not clear to me until your current answer. As you can see, my conclusions were wrong, but that was just a suggestion.
-
@olivierlambert Well, if XCP-NG really delivered what you postulate, it would be great.
But IMHO the reality looks different, at least for me today. HVM templates for Windows don't provide PvHVM capability; they are just fully virtualized. Only the xenbus driver gets installed, everything else (I/O & network) is QEMU fully virtualized, which burns needless hypervisor CPU time and power. Looking at this, the arguments against PV and its so-called disadvantages in memory access become questionable ...
The underlying Xen Project stack does provide full PvHVM capability for Windows in HVM, and there are drivers for all of it: "Xen PV Storage Host Adapter", "XENSRC PVDISK SCSI Disk Device", "Xen PV Network Device", "Xen PV Network Class", "Xen PV Console", "Xen PV Bus", plus USB3 capability for HVM on top. And yes, those Windows (Pv)HVMs are lightning fast, tested with the Xen Project 4.11 stack included in Debian & Ubuntu ...
On top of that, I can't find any installable management agent for Windows HVMs inside XCP-NG, neither on WS2019 nor on Win10.
So, only Linux guests can be installed as PvHVM, but does that make sense? Every Linux kernel since version 2.6 brings Xen device/driver capabilities; there was never any need to modify the kernel of any known Linux distribution to run it inside a PV guest. Sure, not all of them ship the Xen Project hypervisor, but that is not the question here.
Killing PV in favor of Linux PvHVM kills Xen's biggest differentiator over other virtualization solutions, one of its beauties. Xen on ARM, for example, is only possible with the good old PV capability ...
I had planned to restructure my home IT back to my roots, fully open source, and was happy to see XenServer still alive as XCP-NG. Great: a relatively recent kernel, fully 64-bit, UEFI bootable. After testing for a while I am rather disenchanted, seeing Windows HVMs being just fully virtualized (but UEFI capable), and Linux getting no PV, just HVM/PvHVM, and not UEFI capable, even though that is the standard today. Try to install Ubuntu Server 20.04.2 inside such an HVM ...
Not even talking about server management: OK, the good old Windows Center software is still available, not ideal, I would prefer a sleek HTML5 GUI. XOA looks like vSphere vCenter's ugly sister, permanently bothering you with pay options.
Don't get me wrong, I fully understand open source developers cannot live on air and love, but that is absolutely no basis for permanently hassling users by poking them towards pay options. And overall the GUI structure is complex and convoluted ... why can't it be simple and well structured like XCP-NG Center ... ?
-
It seems there's some confusion here. Please re-read my older post, I'm clearly explaining what PV and HVM are (and PVHVM when you have PV drivers). I don't understand how you could draw the exact opposite conclusion from what I said.
In my post from last year, just a bit above, I'm saying:
In XCP-ng right now, PVHVM is the best mode for all OS, yes. But it's not really a "mode". It's HVM plus PV drivers inside it.
In fact, there's only PV or HVM.
PVHVM could be translated as "HVM mode" and PV drivers working inside the guest (guest = VM).
PV is being progressively removed. Eventually, PVHv2 will probably land one day, but not before a few more releases.
This is my postulate (which also happens to be… a fact).
So when you said "HVM templates for Windows don't provide PVHVM", it doesn't make sense. There's no PVHVM mode per se until you install the OS and get your PV drivers inside. There is NO "default" PVHVM mode. You boot on HVM, then load PV drivers on kernel boot. So it's either PV mode or HVM mode, and in HVM mode you can be "helped" by PV drivers. It's called PVHVM but it's not a "possible boot mode".
PV drivers bring network and disk access without using QEMU emulation. HVM with full emulation is just used at boot, and will be replaced/switched during the Operating System's kernel boot (it works roughly the same way in Windows and Linux).
For example on Linux, it's emulated until you exit GRUB; then the Linux kernel detects the Xen devices and loads the Xen PV drivers (which avoids the emulation). Same thing on Windows if you have PV drivers installed.
Without PV drivers, you will rely on Qemu disk/network emulation.
The diff between Linux and Windows: PV drivers are bundled in most distro's kernel. You have to install them on Windows.
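As a rough sanity check from inside a Linux guest (just a sketch; depending on the kernel configuration the front-ends may be built into the kernel rather than loaded as modules):
lsmod | grep -E 'xen_netfront|xen_blkfront'   # PV network/disk front-ends, if built as modules
dmesg | grep -i xen                           # otherwise, look for the Xen front-end messages at boot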
If you can't find PV drivers for Windows, you probably forgot to read the doc: https://xcp-ng.org/docs/guests.html#windows
Also, back on PV: PV mode only supports kernels that are virtualization aware (so it requires modified kernels; good luck finding a PV kernel for Windows).
PV mode made sense back in the days when there were no hardware virtualization instructions in your CPUs and motherboards. But PV could be summed up as "fully software managed virtualization". So in fact, it's in general slower than HVM with hardware-assisted virtualization, because you have to handle more features in software instead of letting the hardware do it for you.
Also, because you have to deal with everything in software, PV mode tends to be more affected by security issues on modern hardware. That's why it's deprecated and being slowly replaced by PVH mode.
That's why HVM on modern hardware (i.e. less than ~10 years old) with PV drivers will almost always beat PV guests.
Also, it seems you judged XOA too fast before forming an opinion: you can have all the features by installing it from GitHub directly: https://xen-orchestra.com/docs/installation.html#from-the-sources
It's OK to be confused by this big world with a lot of terms and complicated technology. But be careful about making statements without investigating more before posting.
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
So when you said "HVM templates for Windows don't provide PVHVM", it doesn't make sense.
Yep, sorry for the confusion, I really meant the same thing: HVM with PV drivers inside, or in short "PvHVM" ...
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
That's why HVM on modern hardware (i.e. less than ~10 years old) with PV drivers will almost always beat PV guests.
I agree; in the end, good old plain PV proved it, despite the odd booting (PyGrub) and expensive memory accesses ...
But I only see that approach inside Linux HVMs, not in HVMs for Windows. It's easy to check in "Device Manager": disks are of the "QEMU ATA" type, so fully virtualized, and the same goes for the network, either an RTL8169 or an Intel E1000, likewise fully virtualized. A fresh Windows installation inside XCP-NG 8.2.0 doesn't search for drivers for Xen devices (network or I/O) ... only "xenbus" seems to come into play ...
When I do the same on a Debian-based Xen hypervisor (kernel 4.19 & Xen 4.11), creating HVMs using .cfg files, I need to have xenvif, xennet and xenvbd available right at the time of the fresh Windows installation; no big deal, thanks to you guys. I am missing that approach in XCP-NG ... I want to have Windows in HVM based on Xen PV devices.
And please enable UEFI for Linux HVMs ...
-
So you probably didn't get the drivers installed correctly then. You should have PV drivers for both network AND disk in Windows.
You MUST install them yourself. It's in the documentation I linked just earlier.
Also regarding UEFI, same thing: I don't get why you said it doesn't work. It does: just go to the Advanced tab of your VM and enable it. Or, when you create your VM, select UEFI.
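For reference, a sketch of the CLI equivalent, assuming the usual HVM-boot-params firmware key and a VM that hasn't been started yet:
xe vm-param-set uuid=<VM UUID> HVM-boot-params:firmware=uefi   # switch the guest firmware from BIOS to UEFI
xe vm-param-get uuid=<VM UUID> param-name=HVM-boot-params      # verify the current value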
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
Or even when you create your VM, select UEFI.
Hmm, I don't have UEFI as an option for Linux-templated HVMs in "XCP-NG Center" ... ? Just "BIOS Boot" ...
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
You MUST install them yourself.
Just in case you didn't understand: I did install them ... but no Xen PV devices show up inside that Windows (WS2019 or Win10) installation, only "xenbus" under "System devices" ...
Drivers from here:
-
Oh man, it just works when setting up a fresh Windows Server 2019 HVM using XOA ... but obviously not anymore with "XCP-NG Center".
That XOA HVM was created with QEMU NVMe disks, and I could install the "XCP-ng Windows Guest Tools" successfully, for the very first time. This turned "Intel E1000" & "QEMU NVMe disk" into "XCP-ng PV Network" & "XENSRC PVDISK SCSI disk drive" ...
Too bad, "XCP-ng Center" would have been my preference ... I don't have a server/cloud farm to manage. And yes, with XOA I can define UEFI boot for Linux HVMs.
So, let's see if I can retrofit my Windows VM backups into XCP-ng HVMs with the guest tools.
-
If you installed them and don't have the disk/net drivers, it just means they weren't installed correctly.
Also, as stated in https://xcp-ng.org/docs/management.html, XCP-ng Center is only community supported, so it doesn't have all the bells and whistles.
-
@olivierlambert Well, there was no hint that "XCP-ng Windows Guest Tools" turns the standard fully virtualized QEMU devices into Xen PV devices. Windows admins normally prefer to see unconfigured devices looking for drivers ...
One thing I still don't get: I always thought the HVM templates live on XenServer/XCP-ng itself. Why does "XCP-ng Center" create a different WS2019 HVM compared to XOA?
Center's HVM provides QEMU ATA disks, XOA's provides "QEMU NVMe disks"; that might be the key for the "XCP-ng Windows Guest Tools" installation ...
-
About PVHVM: no hint? What about the message I quoted from myself, explaining there are only HVM and PV modes and that only PV drivers installed in a guest will "change" the guest behavior? In any case, that's part of the guest agents/drivers section in our official doc, see https://xcp-ng.org/docs/guests.html#windows If it's not clear enough, you can contribute to this page, or let us know exactly what you would expect here.
Still some confusion, so let me explain: XAPI clients aren't providing anything. XO and XCP-ng Center aren't creating anything; they are like your TV remote (with more or fewer features).
There are no different templates between them, just different values that can modify the VM behavior. If you want to compare values, you can use xe (another XAPI client) to display VM details (xe vm-param-list uuid=<VM UUID>) and get the diff between them. I suspect XCP-ng Center might use some specific options, but any UEFI guest will rely on QEMU NVMe disks if I remember correctly. So it's not Center or XO related, it's mainly UEFI or not.
This isn't related at all to getting PV drivers or not. QEMU NVMe is still emulation (a better one but still).
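A minimal sketch of that comparison from dom0 (the UUIDs and file names are placeholders):
xe vm-param-list uuid=<UUID of the Center-created VM> > center-vm.txt
xe vm-param-list uuid=<UUID of the XO-created VM> > xo-vm.txt
diff center-vm.txt xo-vm.txt   # shows which options actually differ between the two VMs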
-
Citrix pushed UEFI only for Windows VMs. It works for Linux VMs but they don't (or didn't) care much about it so they probably did not offer the option in XenCenter (what XCP-ng Center is based on), despite it working very well.
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
no hint?
Yes, there's no hint in any description that after a new install, the Windows HVM does come up with "QEMU NVMe disk" and "Intel E1000". And then installation of "XCP-ng Windows Guest Tools" does push "QEMU NVMe disk" to become "XENSRC PVDISK SCSI disk drive" and "Intel E1000" to become "XCP-ng PV Network".
What's more common here is that the Windows HVM comes up with some "blank devices" for which drivers need to be installed, to finally become a so-called "PvHVM" ...
One topic to be clarified: is it possible to retrofit a Windows image backup from another virtual environment so it becomes an HVM with PV drivers ...
And I'm still not totally happy with XOA; it's a bit overloaded for my purposes, eats 2 GiB of memory on my deliberately tightly sized home server, and it's an additional hassle if my single server needs e.g. a reboot. For a setup like mine, out-of-band management with "XCP-ng Center" has some advantages ...
-
after a new install, the Windows HVM does come up with "QEMU NVMe disk" and "Intel E1000". And then installation of "XCP-ng Windows Guest Tools" does push "QEMU NVMe disk" to become "XENSRC PVDISK SCSI disk drive" and "Intel E1000" to become "XCP-ng PV Network".
Good, you got it right now
What's more common here is that the Windows HVM comes up with some "blank devices" for which drivers need to be installed, to finally become a so-called "PvHVM" ...
You can't have a "PVHVM-ready template" without having an Operating System running with drivers enabled. So, as I already said multiple times: PVHVM isn't a virtualization mode by itself (it's PV or HVM). So you can't just flip a "PVHVM" button during VM creation. The closest alternative would be to transform a VM into a template, so after each VM creation with this template, you'll enjoy PVHVM out of the box.
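A sketch of how that could look with xe, assuming a fully installed (and shut down) Windows VM that already has the PV drivers inside; the name labels are placeholders:
xe vm-param-set uuid=<prepared VM UUID> is-a-template=true            # turn the prepared VM into a custom template
xe vm-install template=<template name-label> new-name-label=win-pv   # VMs created from it get the PV drivers out of the box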
Regarding XOA, I understand your point of view, and we didn't wait for your feedback to offer an XCP-ng Center alternative with something called… XO Lite https://xen-orchestra.com/blog/xen-orchestra-5-59/#xolite
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
You can't have a "PVHVM-ready template" without having an Operating System running with drivers enabled.
Well, sorry, I disagree here: you can, but then you need a second virtual CD-ROM drive in parallel, with the drivers on it. No problem, we are defining virtual hardware with a mouse click: just set up two CD-ROM devices and give the drivers to Windows right at the fresh installation ...
An xl-based definition might look like this:
name = "winserv" type = "hvm" firmware = "uefi" device_model_version = "qemu-xen" device_model_override = "/usr/bin/qemu-system-x86_64" xen_platform_pci = 1 vcpus = 4 memory = 8192 #maxmem = 16384 vga = "stdvga" vif = [ 'mac=AA:BB:CC:DD:EE:FF, bridge=xenbr0, type=vif' ] disk = [ '/dev/vg/winserv-disk1,,xvda', '/srv/xen/iso_import/de_windows_server_2019_essentials_updated_sept_2019_x64_dvd_1a60868a.iso,,xvdc:cdrom', '/srv/xen/iso_import/Xen64drivers.iso,,xvdd:cdrom' ] # boot on floppy (a), hard disk (c) or CD-ROM (d) # default: hard disk, cd-rom, floppy boot = "dc" usb = 1 usbdevice = [ 'tablet' ] usbctrl = [ 'version=3, ports=4' ] usbdev = [ 'hostbus=4, hostaddr=2' ] vnc = 1 vncconsole = 1 vncdisplay = 10 vnclisten = "0.0.0.0" keymap = 'de' localtime = 1 viridian = 1
After installation, the disk line looks like this:
disk = [ '/dev/vg/winserv-disk1,,xvda', ',,xvdc:cdrom', ',,xvdd:cdrom' ]
Pop & un-pop ISO using "xl cd-insert/cd-eject ..."
It's the same procedure within VMware: define virtio for everything and mount the proper driver ISO in parallel during the installation from scratch.
And that's common practice, similar to a real hardware installation, where the devices are built in and Windows doesn't know their drivers at installation time ...