XCP-ng

    windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess

    Compute
    23 Posts 4 Posters 8.2k Views
    • olivierlambertO Offline
      olivierlambert Vates πŸͺ Co-Founder CEO
      last edited by olivierlambert

      It seems there's some confusion here. Please re-read my older post; I clearly explain what PV and HVM are (and PVHVM, when you have PV drivers). I don't understand how you could draw the exact opposite conclusion from what I said.

      In my post from last year, just a bit above, I'm saying:

      In XCP-ng right now, PVHVM is the best mode for all OS, yes. But it's not really a "mode". It's HVM plus PV drivers inside it.

      In fact, there's only PV or HVM.

      PVHVM could be translated as "HVM mode" and PV drivers working inside the guest (guest = VM).

      PV is being removed progressively. PVHv2 will probably land one day, but not before a few more releases.

      This is my postulate (which also happens to be… a fact).

      So when you said "HVM templates for Windows don't provide PVHVM", it doesn't make sense. There's no PVHVM mode per se until you install the OS and get your PV drivers inside. There is NO "default" PVHVM mode. You boot on HVM, then load PV drivers on kernel boot. So it's either PV mode or HVM mode, and in HVM mode you can be "helped" by PV drivers. It's called PVHVM but it's not a "possible boot mode".

      PV drivers bring network and disk access without using QEMU emulation. HVM with full emulation is just used at boot, and will be replaced/switched during the operating system's kernel boot (it works roughly the same way in Windows and Linux).

      For example on Linux, it's emulated until you exit GRUB; then the Linux kernel will detect the Xen devices and load the Xen PV drivers (which avoid the emulation). Same thing on Windows if you have PV drivers installed.

      Without PV drivers, you will rely on QEMU disk/network emulation.

      The difference between Linux and Windows: PV drivers are bundled in most distros' kernels. On Windows, you have to install them.
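      As an aside, a quick way to see which side of that line a Linux guest is on is to look for the Xen frontend devices in sysfs. A minimal sketch, assuming the standard `/sys/bus/xen/devices` layout (the path is parameterized so the check degrades gracefully on non-Xen machines):

```shell
#!/bin/sh
# Report whether Xen PV frontends (vif = network, vbd = disk) are bound,
# i.e. whether the guest left QEMU emulation behind after kernel boot.
pv_frontends() {
  dir="${1:-/sys/bus/xen/devices}"   # default: standard Xen sysfs location
  if [ -d "$dir" ] && ls "$dir" 2>/dev/null | grep -Eq '^(vif|vbd)'; then
    echo "PV frontends active: disk/net bypass QEMU emulation"
  else
    echo "no PV frontends: disk/net are QEMU-emulated (or not a Xen guest)"
  fi
}
pv_frontends
```

      On a guest running with PV drivers you would also expect `xen_netfront` / `xen_blkfront` in the `lsmod` output.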

      If you can't find PV drivers for Windows, you probably forgot to read the doc: https://xcp-ng.org/docs/guests.html#windows

      Also, back on PV: PV mode only supports kernels that are virtualization aware (so it requires modified kernels; good luck finding a PV kernel for Windows).

      PV mode made sense back in the day when there were no hardware virtualization instructions in your CPUs and motherboards. PV could be summed up as "fully software-managed virtualization". So in fact, it's generally slower than HVM with hardware-assisted virtualization, because you have to handle more features in software instead of leaving it to the hardware.

      Also, because you have to deal with everything in software, PV mode tends to be more affected by security issues on modern hardware. That's why it's deprecated and being slowly replaced by PVH mode.

      That's why HVM with modern (ie <10y) hardware and PV drivers will almost always beat PV guests.

      Also, it seems you judged XOA too quickly before forming an opinion: you can get all features by installing it from GitHub directly: https://xen-orchestra.com/docs/installation.html#from-the-sources

      It's OK to be confused by this big world with a lot of terms and complicated technology. But be careful when you make statements without investigating before posting πŸ™‚

      • F Offline
        fnu @olivierlambert
        last edited by

        @olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:

        So when you said "HVM templates for Windows don't provide PVHVM", it doesn't make sense.

        Yep, sorry for the confusion, I really meant the same thing: HVM with PV drivers inside, in short "PVHVM" ...

        @olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:

        That's why HVM with modern (ie <10y) hardware and PV drivers will almost always beat PV guests.

        I agree; in the end, good old plain PV proved it, despite awkward booting (PyGrub) and expensive memory accesses ...

        But I see that approach only inside Linux HVMs, not in HVMs for Windows. Easy to check in "Device Manager": disks are "QEMU ATA" type, so fully virtualized, and the same goes for the network, either RTL8169 or Intel E1000, likewise fully virtualized. A fresh Windows installation on XCP-ng 8.2.0 doesn't search for drivers for Xen devices (net or I/O) ... only "xenbus" seems to show up ...

        When I do the same on a Debian-based Xen hypervisor (kernel 4.19 & Xen 4.11), creating HVMs using .cfg files, I need xenvif, xennet and xenvbd available right at fresh Windows installation time; no big deal, thanks to you guys. I miss that approach in XCP-ng ... I want to have Windows in HVM based on Xen PV devices.

        And please enable UEFI for Linux HVMs ...

        • olivierlambertO Offline
          olivierlambert Vates πŸͺ Co-Founder CEO
          last edited by olivierlambert

          So you probably didn't get drivers correctly installed then. You should have PV drivers for both network AND disk in Windows.

          You MUST install them yourself. It's in the documentation I linked just earlier.

           Also regarding UEFI, same thing: I don't get why you say it doesn't work. It does: just go to the advanced tab of your VM and enable it. Or even when you create your VM, select UEFI.
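           The same switch can be sketched from the CLI. The `xe` subcommands below are standard XAPI ones, but the VM name-label is a placeholder, and `HVM-boot-params:firmware=uefi` is my understanding of how XCP-ng selects the firmware; verify against the docs before relying on it:

```shell
#!/bin/sh
# Sketch: switch a (halted) VM to UEFI firmware via xe.
# "my-linux-vm" is a hypothetical name-label.
enable_uefi() {
  uuid=$(xe vm-list name-label="$1" --minimal)   # resolve name to UUID
  xe vm-param-set uuid="$uuid" HVM-boot-params:firmware=uefi
}
# usage (on the host): enable_uefi my-linux-vm
```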

          • F Offline
            fnu @olivierlambert
            last edited by

            @olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:

            Or even when you create your VM, select UEFI.

            Hmm, I don't have UEFI as an option for Linux templated HVMs in "XCP-NG Center" ... ? Just "BIOS Boot" ...

            @olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:

            You MUST install them yourself.

             Just in case you didn't understand: I did install them ... but no Xen PV devices show up inside that Windows (WS2019 or Win10) installation, only "xenbus" under "System devices" ...

            Drivers from here:

            https://xenproject.org/downloads/windows-pv-drivers/windows-pv-drivers-9-series/windows-pv-drivers-9-0-0/

            • F Offline
              fnu @fnu
              last edited by fnu

              Oh man, it just works when setting up a fresh Windows Server 2019 HVM using XOA ... though apparently not anymore with "XCP-NG Center".

              That XOA HVM was created with QEMU NVMe disks, and I could install "XCP-ng Windows Guest Tools" successfully on the very first try. This turned "Intel E1000" & "QEMU NVMe disk" into "XCP-ng PV Network" & "XENSRC PVDISK SCSI disk drive" ... πŸ‘πŸ»

              Too bad, "XCP-ng Center" would have been my preference ... I don't have a server/cloud farm to manage. And yes, with XOA I can define UEFI boot for Linux HVMs.

              So, let's see if I can retrofit my Windows VM backups into XCP-ng HVMs with the guest tools.

              • olivierlambertO Offline
                olivierlambert Vates πŸͺ Co-Founder CEO
                last edited by

                If you installed them and still don't have the disk/net drivers, it just means they weren't installed correctly πŸ™‚

                Also, as stated in https://xcp-ng.org/docs/management.html XCP-ng Center is only community supported, so it doesn't have all the bells and whistles πŸ™‚

                • F Offline
                  fnu @olivierlambert
                  last edited by fnu

                  @olivierlambert Well, there was no hint that "XCP-ng Windows Guest Tools" turns the standard fully virtualized QEMU devices into Xen PV devices. Windows admins normally expect to see unconfigured devices looking for drivers ... πŸ˜‰

                  One thing I still don't get: I always thought the HVM templates live on XenServer/XCP-ng itself. Why does "XCP-ng Center" create a different WS2019 HVM compared to XOA?

                  Center's HVM provides QEMU ATA disks, XOA's "QEMU NVMe disks"; that might be the key for the "XCP-ng Windows Guest Tools" installation ...

                  • olivierlambertO Offline
                    olivierlambert Vates πŸͺ Co-Founder CEO
                    last edited by

                    About PVHVM: no hint? What about my message, where I quoted myself explaining that there are only HVM and PV modes, and that only PV drivers installed in a guest will "change" the guest behavior? In any case, that's part of the guest agents/drivers section in our official doc, see https://xcp-ng.org/docs/guests.html#windows If it's not clear enough, you can contribute to this page, or let us know exactly what you would expect here πŸ™‚

                    Still some confusion, so let me explain: XAPI clients aren't providing anything. XO and XCP-ng Center aren't creating anything; they are like your TV remote (with more or fewer features).

                    There are no different templates between them, just different values that can modify the VM behavior. If you want to compare values, you can use xe (the XAPI CLI) to display VM details (xe vm-param-list uuid=<VM UUID>) and get the diff between them.
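                    To make the comparison concrete: dump both VMs' parameters and diff the dumps. The xe calls (shown in the comments) need a real host, so this sketch simulates the two dumps with tiny, purely illustrative excerpts; the `device-model` values here are made up for the example:

```shell
#!/bin/sh
# On a real XCP-ng host you would generate the dumps with:
#   xe vm-param-list uuid=<CENTER VM UUID> > center-vm.txt
#   xe vm-param-list uuid=<XOA VM UUID>    > xoa-vm.txt
# Simulated here with two hypothetical excerpts:
cat > center-vm.txt <<'EOF'
HVM-boot-params (MRW): order: dc
platform (MRW): device-model: qemu-upstream-compat
EOF
cat > xoa-vm.txt <<'EOF'
HVM-boot-params (MRW): order: dc
platform (MRW): device-model: qemu-upstream-uefi
EOF
diff center-vm.txt xoa-vm.txt || true   # only the differing params are printed
```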

                    I suspect XCP-ng Center might use some specific options, but any UEFI guest will rely on QEMU NVMe disks if I remember correctly. So it's not Center or XO related, it's mainly UEFI or not.

                    This isn't related at all to getting PV drivers or not. QEMU NVMe is still emulation (a better one but still).

                    • stormiS Offline
                      stormi Vates πŸͺ XCP-ng Team
                      last edited by

                      Citrix pushed UEFI only for Windows VMs. It works for Linux VMs, but they don't (or didn't) care much about it, so they probably did not offer the option in XenCenter (which XCP-ng Center is based on), despite it working very well.

                      • F Offline
                        fnu @olivierlambert
                        last edited by

                        @olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:

                        no hint?

                        Yes, no hint in any description that after a fresh install the Windows HVM comes up with "QEMU NVMe disk" and "Intel E1000", and that installing "XCP-ng Windows Guest Tools" then turns "QEMU NVMe disk" into "XENSRC PVDISK SCSI disk drive" and "Intel E1000" into "XCP-ng PV Network".

                        More common here is that the Windows HVM comes up with some "blank devices", where drivers need to be installed, to finally become a so-called "PVHVM" ...

                        One topic to be clarified: is it possible to retrofit a Windows image backup from another virtual environment into an HVM with PV drivers ...

                        And I'm still not totally happy with XOA, a bit overloaded for my purpose: it eats 2 GiB of memory on my deliberately tightly sized home server, and it's an additional hassle if my single server needs e.g. a reboot. For a setup like mine, out-of-band management with "XCP-ng Center" has some advantages ...

                        • olivierlambertO Offline
                          olivierlambert Vates πŸͺ Co-Founder CEO
                          last edited by olivierlambert

                          after a fresh install the Windows HVM comes up with "QEMU NVMe disk" and "Intel E1000", and installing "XCP-ng Windows Guest Tools" then turns "QEMU NVMe disk" into "XENSRC PVDISK SCSI disk drive" and "Intel E1000" into "XCP-ng PV Network"

                          Good, you've got it now πŸ™‚

                          More common here is that the Windows HVM comes up with some "blank devices", where drivers need to be installed, to finally become a so-called "PVHVM" ...

                          You can't have "PVHVM ready template" without having an Operating System running with drivers enabled. So, as I already said multiple times: PVHVM isn't a virtualization mode by itself (it's PV or HVM). So you can't just "flip the PVHVM" button during the VM creation. The closest alternative would be to transform a VM into a template, so after each VM creation with this template, you'll enjoy PVHVM out of the box πŸ™‚
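                          That "transform a VM into a template" step can be sketched with xe; the name-labels below are hypothetical, and `is-a-template=true` is the usual XAPI way to freeze a VM as a template:

```shell
#!/bin/sh
# Sketch: once a Windows VM has the PV drivers installed, freeze it as a
# template so every clone starts with working PV devices out of the box.
# "ws2019-gold" and "ws2019-02" are hypothetical name-labels.
make_pvhvm_template() {
  uuid=$(xe vm-list name-label="$1" --minimal)
  xe vm-shutdown uuid="$uuid"                       # template source must be halted
  xe vm-param-set uuid="$uuid" is-a-template=true   # the VM becomes a template
}
# usage: make_pvhvm_template ws2019-gold
# later: xe vm-install template=ws2019-gold new-name-label=ws2019-02
```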

                          Regarding XOA, I understand your point of view, and we didn't wait for your feedback to offer an XCP-ng Center alternative with something called… XO Lite πŸ™‚ https://xen-orchestra.com/blog/xen-orchestra-5-59/#xolite

                          • F Offline
                            fnu @olivierlambert
                            last edited by fnu

                            @olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:

                            You can't have "PVHVM ready template" without having an Operating System running with drivers enabled.

                            Well, sorry, I disagree here: you can, but then you need a second virtual CD-ROM drive in parallel, with the drivers on it. No problem, we define virtual hardware with a mouse click; just set up two CD-ROM devices and give the drivers to Windows right at fresh installation ...

                            An xl-based definition might look like this:

                            name = "winserv"
                            type = "hvm"
                            firmware = "uefi"
                            device_model_version = "qemu-xen"
                            device_model_override = "/usr/bin/qemu-system-x86_64"
                            xen_platform_pci = 1
                            vcpus = 4
                            memory = 8192
                            #maxmem = 16384
                            vga = "stdvga"
                            vif = [ 'mac=AA:BB:CC:DD:EE:FF, bridge=xenbr0, type=vif' ]
                            disk = [
                                     '/dev/vg/winserv-disk1,,xvda',
                                     '/srv/xen/iso_import/de_windows_server_2019_essentials_updated_sept_2019_x64_dvd_1a60868a.iso,,xvdc:cdrom',
                                     '/srv/xen/iso_import/Xen64drivers.iso,,xvdd:cdrom'
                                   ]
                            # boot on floppy (a), hard disk (c) or CD-ROM (d)
                            # default: hard disk, cd-rom, floppy
                            boot = "dc"
                            usb = 1
                            usbdevice = [ 'tablet' ]
                            usbctrl = [ 'version=3, ports=4' ]
                            usbdev = [ 'hostbus=4, hostaddr=2' ]
                            vnc = 1
                            vncconsole = 1
                            vncdisplay = 10
                            vnclisten = "0.0.0.0"
                            keymap = 'de'
                            localtime = 1
                            viridian = 1
                            

                            After installation disk line like that:

                            disk = [ '/dev/vg/winserv-disk1,,xvda', ',,xvdc:cdrom', ',,xvdd:cdrom' ]
                            

                            Insert & eject ISOs using "xl cd-insert" / "xl cd-eject" ...
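                            Spelled out for the domain "winserv" from the config above (a sketch; the domain must actually exist on the Xen host for these calls to do anything):

```shell
#!/bin/sh
# Sketch: post-install media handling for domain "winserv".
eject_install_media() {
  xl cd-eject winserv xvdc    # Windows installer ISO
  xl cd-eject winserv xvdd    # PV driver ISO
}
reinsert_driver_iso() {
  xl cd-insert winserv xvdd /srv/xen/iso_import/Xen64drivers.iso
}
```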

                            Same procedure within VMWare: define virtio for everything and mount the proper driver ISO in parallel while installing from scratch.

                            And that's common practice, similar to a real hardware installation, where the devices are built in and Windows doesn't know their drivers at installation time ...

                            • olivierlambertO Offline
                              olivierlambert Vates πŸͺ Co-Founder CEO
                              last edited by

                              So you say you disagree, but you are giving an example with xl, which isn't the toolstack for XCP-ng πŸ˜•

                              I'm answering for XCP-ng, not for something else.

                              Also, you are consistently confused on how things are working. Even with the drivers installed during OS installation, you can't have a PVHVM template; such a thing doesn't exist. Period.

                              When you boot, you will use emulation UNTIL an operating system takes it from there. Yes, you can do that with any Ubuntu live CD that won't install anything on any hard drive. But still, up to GRUB, you are using emulation (HVM). The VM is still HVM. It will be HVM forever (as long as you don't convert it to PV). We use the PVHVM term; it just means that your HVM guest is now able to talk to PV drivers. But PVHVM mode doesn't exist in the Xen code base. Search for it if you like.

                              • F Offline
                                fnu @olivierlambert
                                last edited by fnu

                                @olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:

                                Also, you are consistently confused on how things are working.

                                The question is who's confused or ignorant ... I stated above two or three times that I am absolutely clear on what makes an HVM a so-called "PVHVM": just the driver add-on afterwards 😠

                                But you are confused in saying it is not possible to install Windows on "unknown" hardware, where Windows doesn't have drivers built in. This has in fact been possible since Windows NT 4.0 back in the 1990s. It is easy to inject drivers right at the start of installation, all kinds, not only block storage drivers ...

                                • olivierlambertO Offline
                                  olivierlambert Vates πŸͺ Co-Founder CEO
                                  last edited by

                                  I never said THAT was impossible; as long as the OS kernel has started, you can use whatever drivers you like. I was just addressing the claim that PVHVM has been a real mode since the start, and it's not. You can't have a PVHVM template; it doesn't make sense (unless you already have an OS installed).

                                  That's all I said 🀷 You are moving the goalposts every time.
