RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
@archw I can confirm. That is exactly the behaviour I see with my Windows VMs.
-
@dinhngtu Thanks for the pointer. Yes, it seems that the root cause also makes routes disappear. However, the fact that the routing information is gone is sadly not mentioned explicitly. Maybe something to add to your docs as well.
Caution when updating the tools: verify the interface IP configuration and the routing entries afterwards.
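For anyone who wants to do that check after an update, a quick look at the adapters and the persistent routes could look like this (just a sketch; run in an elevated PowerShell):

```powershell
# List adapters and their current IPv4 configuration
Get-NetAdapter
Get-NetIPAddress -AddressFamily IPv4

# Persistent (boot-surviving) routes only; these are the ones
# that disappeared after the tools upgrade in my case
Get-NetRoute -PolicyStore PersistentStore
```

If the persistent store comes back empty where you expect static routes, they have to be re-created.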
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
Just did a couple more tests. Here are my findings:
- Upgrading the tools from v9.3.3 to v9.4.1 does preserve the routing table.
- Upgrading the tools from v9.2.1 to v9.4.1 does not preserve the routing table.
Here are a couple of PowerShell commands used for testing:
Get-NetRoute -PolicyStore PersistentStore
Get-NetAdapter
New-NetRoute -DestinationPrefix "10.10.0.0/24" -InterfaceIndex <ifIndex> -NextHop 10.10.0.254
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
@pdonias Sure thing. I can test it in my test environment.
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
@DustinB Not IP assignments, I am talking about static routes. See e.g. https://learn.microsoft.com/en-us/powershell/module/nettcpip/get-netroute
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
Here is another interesting fact: after installing the new tools (v9.4.1) my static routes in Windows were all gone.
Definitely a good way to lose connectivity to your domain controller. And that's why you have good monitoring and store things in Ansible et al. ...
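As a precaution, the persistent routes can be exported before an upgrade and re-created afterwards. A rough sketch (the CSV path is just an example; InterfaceAlias is used instead of InterfaceIndex because the index may change when the drivers are reinstalled):

```powershell
# Before the upgrade: save all persistent static routes to a file
Get-NetRoute -PolicyStore PersistentStore |
    Select-Object DestinationPrefix, NextHop, InterfaceAlias, RouteMetric |
    Export-Csv C:\Temp\routes-backup.csv -NoTypeInformation

# After the upgrade: re-create any routes that went missing
Import-Csv C:\Temp\routes-backup.csv | ForEach-Object {
    New-NetRoute -DestinationPrefix $_.DestinationPrefix `
        -NextHop $_.NextHop `
        -InterfaceAlias $_.InterfaceAlias `
        -RouteMetric $_.RouteMetric
}
```

Not a substitute for keeping the routes in Ansible et al., but cheap insurance before touching the tools.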
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
@dinhngtu On the machine where it worked, the option "Manage Citrix PV drivers via Windows Update" was not enabled. It seems that my older BIOS Windows 10 VMs have that option enabled; on all UEFI VMs the option is disabled.
As I wanted to go and check whether the option is present in the templates, I realized that the Windows templates are gone from Xen Orchestra v5.106.4???
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
On another Windows 10 host it worked. What was different: I saw the message box "Tools have been installed successfully". Maybe that makes a difference?
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
@dinhngtu On a Windows 10 VM rebooting alone did not do the trick. After 5 reboots the script still reports vulnerable devices:
.\Install-XSA468Workaround.ps1 -Scan
Looking for vulnerable XenIface objects
Found vulnerable object XENBUS\VEN_XSC000&DEV_IFACE\_
Found vulnerable object XENBUS\VEN_XSC000&DEV_IFACE\_
Looking for vulnerable XenIface WMI GUIDs
Found vulnerable WMI GUID 1D80EB99-A1D6-4492-B62F-8B4549FF0B5E
Found vulnerable WMI GUID 12138A69-97B2-49DD-B9DE-54749AABC789
Found vulnerable WMI GUID AB8136BF-8EA7-420D-ADAD-89C83E587925
Found XenIface vulnerability, it's recommended to run the script
True
Running
.\Install-XSA468Workaround.ps1
works as expected. After another reboot nothing is reported as being vulnerable anymore. On a Windows Server 2019 VM I saw the behaviour you described: installing the tools and a reboot was enough.
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
@dinhngtu Ok, I will keep that in mind as I go through all the VMs. As I currently cannot update XCP-ng on all hosts (8.2.1 LTS), the VMs where the new tools were installed and mitigations applied show up as "orange".
On an XCP-ng 8.3 test host with all updates applied the detection works as advertised.
-
RE: XSA-468: multiple Windows PV driver vulnerabilities - update now!
After upgrading the VM tools the mitigation script still shows vulnerable devices on some hosts. After running the script to apply the mitigations, nothing is reported as being vulnerable anymore.
Thus the question: Is applying the mitigations a necessary action as well? Or does installing the v9.4.1 tools fix the vulnerability?
-
RE: VM migration seems to have cleared VM secure boot state
@stormi I have seen this state clearing for two VM migrations on the same pool. The hardware on all machines is identical and the migration is from one machine to the other, so no cross-pool migration involved.
What we have also observed after the fact is that Xen Orchestra states the VM was created on the day of the migration, not the day the VM was actually created. So it seems as if it was indeed "re-created" during the migration.
For "failed" machines it says:
Created by Unknown on 2024-08-25 18:11 with template Windows Server 2019 (64-bit)
For machines which were not migrated but were created around the same time:
Created by Unknown on 2022-01-07 10:09 with template Windows Server 2019 (64-bit)
-
RE: VM migration seems to have cleared VM secure boot state
@stormi This did not work on a test system. The command simply errored out.
-
RE: VM migration seems to have cleared VM secure boot state
Thanks for your insights. After a lot of trial and error we were able to get the VM back online with secure boot enabled. The recovery was as follows:
- Disable secure boot in Xen Orchestra for the VM
- Boot Windows without secure boot
- Drop into the UEFI firmware settings via shutdown /f /r /o /t 0 and select Troubleshoot -> Advanced Options -> UEFI Firmware Settings
- Then select Boot Maintenance Manager -> Boot from File
- Select the right volume and browse to EFI\Microsoft\Boot
- Select SecureBootRecovery.efi and hit enter to start the program; this will re-apply the certificate "Windows UEFI CA 2023" to the secure boot DB
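To verify the recovery, the checks from the original post can be repeated from within Windows after re-enabling secure boot for the VM in Xen Orchestra (a sketch; run in an elevated PowerShell):

```powershell
# After the recovery, the 2023 CA should now be present in the db
[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).bytes) -match 'Windows UEFI CA 2023'

# And Windows should report secure boot as enabled again
Confirm-SecureBootUEFI
```

Both commands should return True once the certificate has been re-applied and secure boot is active again.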
-
VM migration seems to have cleared VM secure boot state
We are hitting a rather interesting case.
In a production environment (XCP-ng 8.2.1) the secure boot changes according to KB5025885 were implemented on a Windows Server 2019 VM (this changed the VM DB and KEK) back in June. After the changes were completed, the VM got live migrated from one pool node to the other without a reboot a month or so later.
For some reason, this seems to have cleared the Secure Boot state of the VM and probably applied the pool's default entries again, because a subsequent reboot another month later landed the VM in the UEFI shell. After endless hours of debugging this makes sense, since the new Windows bootloader is signed by a certificate that XCP-ng does not know about.
Disabling Secure Boot on the VM allows it to start. So we can get the following output from it:
[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).bytes) -match 'Windows UEFI CA 2023'
False
[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI dbx).bytes) -match 'Microsoft Windows Production PCA 2011'
True
Does anyone have any insights into how to re-enable secure boot on such a VM again? One option is probably to include the UEFI 2023 DB / KEK entries as described here: https://github.com/xcp-ng/uefistored/issues/52.
Other suggestions are more than welcome.
References:
- https://support.microsoft.com/en-us/topic/kb5025885-how-to-manage-the-windows-boot-manager-revocations-for-secure-boot-changes-associated-with-cve-2023-24932-41a975df-beb2-40c1-99a3-b3ff139f832d
- https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-secure-boot-key-creation-and-management-guidance?view=windows-11