windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
Or even when you create your VM, select UEFI.
Hmm, I don't have UEFI as an option for HVMs created from Linux templates in "XCP-NG Center" ... ? Just "BIOS Boot" ...
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
You MUST install them yourself.
Just in case you didn't understand: I did install them, but no Xen PV devices show up inside that Windows (WS2019 or Win10) installation, only "xenbus" under "System devices" ...
Drivers from here:
-
Oh man, it just works when setting up a fresh Windows Server 2019 as an HVM using XOA ... obviously not with "XCP-ng Center".
That XOA-created HVM came with QEMU NVMe disks, and I could install the "XCP-ng Windows Guest Tools" successfully on the very first try. This turned "Intel E1000" & "QEMU NVMe disk" into "XCP-ng PV Network" & "XENSRC PVDISK SCSI disk drive" ...
Too bad, "XCP-ng Center" would have been my preference ... I don't have a server/cloud farm to manage. And yes, with XOA I can select UEFI boot for Linux HVMs.
So, let's see if I can retrofit my Windows VM backups into an XCP-ng HVM with Guest Tools.
-
If you installed them and don't have the disk/net drivers, it just means they weren't installed correctly.
Also, as stated in https://xcp-ng.org/docs/management.html, XCP-ng Center is only community supported, so it doesn't have all the bells and whistles.
-
@olivierlambert Well, there was no hint that the "XCP-ng Windows Guest Tools" turn the standard fully emulated QEMU devices into Xen PV devices. Windows admins normally expect to see unconfigured devices waiting for drivers ...
One thing I still don't get: I always thought the HVM templates live on XenServer/XCP-ng itself. Why does "XCP-ng Center" create a different WS2019 HVM than XOA?
Center's HVM provides QEMU ATA disks, XOA "QEMU NVMe disks"; what, then, might be the key for the "XCP-ng Windows Guest Tools" installation ...
-
About PVHVM: no hint? What about the message I quoted myself, explaining that there are only HVM and PV modes, and that only PV drivers installed in a guest will "change" the guest behavior? In any case, that's part of the guest agents/drivers section in our official doc, see https://xcp-ng.org/docs/guests.html#windows If it's not clear enough, you can contribute to this page, or let us know exactly what you would expect here.
Still some confusion, so let me explain: XAPI clients aren't providing anything. XO or XCP-ng Center aren't creating anything, they are like your TV remote (with more or less features).
There are no different templates between them, just different values that can modify the VM behavior. If you want to compare values, you can use xe (another XAPI client) to display VM details (xe vm-param-list uuid=<VM UUID>) and get the diff between them. I suspect XCP-ng Center might use some specific options, but any UEFI guest will rely on QEMU NVMe disks if I remember correctly. So it's not Center- or XO-related, it's mainly UEFI or not.
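To make that comparison concrete, here is a minimal sketch. The xe commands in the comments are meant to run on the XCP-ng host; the UUIDs are placeholders, and the two sample parameter dumps below are made-up data just to illustrate the diff step, not real vm-param-list output:

```shell
# On the XCP-ng host, dump each VM's parameters (UUIDs are placeholders):
#   xe vm-param-list uuid=<VM-A-UUID> > vm-a.params
#   xe vm-param-list uuid=<VM-B-UUID> > vm-b.params
# Illustrating the comparison step with two made-up parameter dumps:
printf 'platform: device-model: qemu-xen\nHVM-boot-params: order: dc\n' > vm-a.params
printf 'platform: device-model: qemu-xen\nHVM-boot-params: order: cd\n' > vm-b.params
# Show only the settings that differ between the two VMs:
diff vm-a.params vm-b.params || true   # diff exits non-zero when the files differ
```

The same diff technique applied to real vm-param-list output would reveal whether the Center-created and XOA-created VMs actually differ in their platform flags.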
This isn't related at all to getting PV drivers or not. QEMU NVMe is still emulation (a better one, but still).
-
Citrix pushed UEFI only for Windows VMs. It works for Linux VMs too, but they don't (or didn't) care much about it, so they probably did not offer the option in XenCenter (which XCP-ng Center is based on), despite it working very well.
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
no hint?
Yes, no hint in any description that, after a fresh install, the Windows HVM comes up with a "QEMU NVMe disk" and an "Intel E1000", and that installing the "XCP-ng Windows Guest Tools" then turns the "QEMU NVMe disk" into a "XENSRC PVDISK SCSI disk drive" and the "Intel E1000" into "XCP-ng PV Network".
More common elsewhere is that the Windows HVM comes up with some "blank devices" for which drivers need to be installed, to finally become a so-called "PVHVM" ...
One topic to be clarified: is it possible to retrofit a Windows image backup from another virtual environment so that it becomes an HVM with PV drivers ...
And I'm still not totally happy with XOA; it's a bit overloaded for my purpose, eats 2 GiB of memory on my deliberately tightly sized home server, and is additional hassle if my single server needs e.g. a reboot. For a setup like mine, out-of-band management with "XCP-ng Center" has some advantages ...
-
after a fresh install, the Windows HVM comes up with a "QEMU NVMe disk" and an "Intel E1000", and installing the "XCP-ng Windows Guest Tools" then turns the "QEMU NVMe disk" into a "XENSRC PVDISK SCSI disk drive" and the "Intel E1000" into "XCP-ng PV Network".
Good, you got it right now
More common elsewhere is that the Windows HVM comes up with some "blank devices" for which drivers need to be installed, to finally become a so-called "PVHVM" ...
You can't have a "PVHVM-ready template" without having an operating system running with drivers enabled. So, as I already said multiple times: PVHVM isn't a virtualization mode by itself (it's PV or HVM). So you can't just flip a "PVHVM" button during VM creation. The closest alternative would be to transform a VM into a template; then, after each VM creation from this template, you'll enjoy PVHVM out of the box.
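That workflow could be sketched with the xe CLI roughly like this (the UUID and names are illustrative placeholders, not from this thread):

```shell
# Sketch only, assuming the standard XCP-ng/XenServer xe CLI on the host.
# 1. Install Windows plus the XCP-ng Windows Guest Tools in a VM, then shut it down.
# 2. Convert that VM into a template (UUID is a placeholder):
xe vm-param-set uuid=<VM UUID> is-a-template=true
# 3. VMs created from this template start with the PV drivers already installed:
xe vm-install template="ws2019-pvhvm" new-name-label="ws2019-clone"
```

The same conversion is available in the GUI clients as "Convert to Template"; either way, the PV drivers come from the installed OS inside the template, not from a virtualization mode.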
Regarding XOA, I understand your point of view, and we didn't wait for your feedback to offer an XCP-ng Center alternative with something called… XO Lite https://xen-orchestra.com/blog/xen-orchestra-5-59/#xolite
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
You can't have "PVHVM ready template" without having an Operating System running with drivers enabled.
Well, sorry, I disagree here: you can, but then you need a second virtual CD-ROM drive in parallel, with the drivers on it. No problem, we are defining virtual hardware with a mouse click; just set up two CD-ROM devices and give the drivers to Windows right at the fresh installation ...
An xl-based definition might look like this:
name = "winserv"
type = "hvm"
firmware = "uefi"
device_model_version = "qemu-xen"
device_model_override = "/usr/bin/qemu-system-x86_64"
xen_platform_pci = 1
vcpus = 4
memory = 8192
#maxmem = 16384
vga = "stdvga"
vif = [ 'mac=AA:BB:CC:DD:EE:FF, bridge=xenbr0, type=vif' ]
disk = [ '/dev/vg/winserv-disk1,,xvda',
         '/srv/xen/iso_import/de_windows_server_2019_essentials_updated_sept_2019_x64_dvd_1a60868a.iso,,xvdc:cdrom',
         '/srv/xen/iso_import/Xen64drivers.iso,,xvdd:cdrom' ]
# boot on floppy (a), hard disk (c) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot = "dc"
usb = 1
usbdevice = [ 'tablet' ]
usbctrl = [ 'version=3, ports=4' ]
usbdev = [ 'hostbus=4, hostaddr=2' ]
vnc = 1
vncconsole = 1
vncdisplay = 10
vnclisten = "0.0.0.0"
keymap = 'de'
localtime = 1
viridian = 1
After installation, the disk line looks like this:
disk = [ '/dev/vg/winserv-disk1,,xvda', ',,xvdc:cdrom', ',,xvdd:cdrom' ]
Insert and eject ISOs using "xl cd-insert" / "xl cd-eject" ...
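For instance (a sketch; the domain name and ISO directory follow the config above, and "other.iso" is a made-up example path):

```shell
# Eject whatever is in the first virtual CD-ROM drive (xvdc) of domain "winserv":
xl cd-eject winserv xvdc
# Insert another ISO into the same drive ("other.iso" is a hypothetical file):
xl cd-insert winserv xvdc /srv/xen/iso_import/other.iso
```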
The same procedure works within VMware: define virtio for everything and mount the proper driver ISO in parallel during a from-scratch installation.
And that's common practice, similar to a real hardware installation, where devices are built in and Windows doesn't know about the drivers at installation time ...
-
So you said you disagree, but you are giving an example with xl, which isn't the toolstack for XCP-ng. I'm answering for XCP-ng, not for something else.
Also, you are consistently confused about how things work. Even by having the drivers installed during OS installation, you can't have a PVHVM template; such a thing doesn't exist. Period.
When you boot, you will use emulation UNTIL an operating system takes over. Yes, you can do that with any Ubuntu live CD that won't install anything on any hard drive. But still, up to GRUB, you are using emulation (HVM). The VM is still HVM. It will be HVM forever (as long as you don't convert it to PV). We use the PVHVM term; it just means that your HVM guest is now able to talk with PV drivers. But a PVHVM mode doesn't exist in the Xen code base. Search for it if you like.
-
@olivierlambert said in windows + (PV | HVM | PVHVM | PVHv1 | PVHv2) , a little mess:
Also, you are consistently confused on how things are working.
The question is who's confused or ignorant ... I stated above two or three times that I am absolutely clear on what makes an HVM a so-called "PVHVM": just the driver add-on afterwards.
But you are confused in saying it is not possible to install Windows on "unknown" hardware for which Windows doesn't have built-in drivers. This has in fact been possible since Windows NT 4.0 back in the 1990s. It is easily possible to inject drivers right at the start of the installation, all kinds of them, not only block storage drivers ...
-
I never said THAT was impossible; as long as the OS kernel has started, you can use whatever drivers you like. I was just answering the claim that PVHVM has been a real mode from the start, and it's not. You can't have a PVHVM template; it doesn't make sense (unless you already have an OS installed).
That's all I said. You are moving the goalposts every time.