XCP-ng 8.3 updates announcements and testing
-
@stormi My "test/production" system, an HP DL165 is updated and running normally with the updated updates. Not seeing any change with secure boot VMs at all, i.e. working just fine.
-
New update candidates for you to test! (adding to the previous batch again)
New updates join the previous batch of update candidates. They're the last ones.
A new XSA (Xen Security Advisory) was published on the 21st of October, and updates to Xen address the disclosed vulnerabilities. We also reverted a change in XAPI that we deemed risky.
Additionally, we are publishing an updated intel-ice alternate driver.
-
xen:
- XSA-475 - Potential risks include Denial of Service (DoS) impacting the whole host, information exposure, or escalation of privileges. There are two vulnerabilities related to hypercalls in the Viridian code:
  - CVE-2025-58147: Out-of-bounds write in vpmask_set() from hypercalls using the HV_VP_SET Sparse format.
  - CVE-2025-58148: Out-of-bounds read in send_ipi() from hypercalls using any format, which could lead to a wild vCPU pointer.
-
xapi:
- We reverted a change related to how rsyslog configuration is handled. The way XenServer handled the change seemed risky to us; we'll take the time to make it in a safer way.
Optional packages:
- Alternate driver: updated to a newer version.
intel-ice-alt: driver sources updated to v1.17.2
Test on XCP-ng 8.3
yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
reboot
The usual update rules apply: pool coordinator first, etc.
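If you're not sure which host is the pool coordinator, one way to check from any host in the pool is a quick xe query (a sketch; the MASTER_UUID variable name is just illustrative):
# Get the UUID of the pool coordinator (master), then resolve it to a host name
MASTER_UUID=$(xe pool-list params=master --minimal)
xe host-list uuid="$MASTER_UUID" params=name-label --minimal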
Versions:
xapi: 25.27.0-2.2.xcpng8.3
xen: 4.17.5-20.2.xcpng8.3
Optional packages:
- Alternate drivers:
intel-ice-alt: 1.17.2-1.xcpng8.3
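To confirm which builds actually landed after rebooting, a quick check on each host (a sketch; the grep pattern is only there to narrow the output):
# List installed xapi, xen and intel-ice packages with their versions
rpm -qa | grep -E 'xapi|xen|intel-ice' | sort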
What to test
Normal use and anything else you want to test.
Test window before official release of the updates
~2 days.
-
-
@gduperrey Works on my play-/homelab (HP ProDesk 600 G6, Dell Optiplex 9010). Can't update my Dell R720s GPU cluster at the moment, though.
-
@gduperrey This is working without problems on my test lab system, an HP ProLiant DL165 G7.
-
Updates published: https://xcp-ng.org/blog/2025/10/23/october-2025-security-and-maintenance-update-for-xcp-ng-8-3-lts/
Thank you for the tests!
-
@gduperrey Updated 2 pools/2 servers @home, updated 3 pools/5 servers @office with RPU.
All fine.
-
Updated 3 pools. All fine, except the last host on the last pool ran into a yum mirror error and failed. Manually running yum update and rebooting the host worked. Sadly I then had to move VMs back around, since the RPU failed and the process of migrating VMs back did not kick off as a result.
Error was:
One of the configured repositories failed (Unknown), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled: yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable <repoid> or subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
xcp-ng-base: Check uncompressed DB failed
-
Updated an AMD Ryzen pool at home and updated two Intel Dell R660 and R640 pools at work. No issues to report back.
-
Updated my two hosts without issues.
-
IMPORTANT NOTICE! After publishing the updates, we discovered a very nasty bug when using the UEFI certificates that we distribute. Long story short, they're too big for the limited space available (57K), and combined with a preexisting bug in varstored, this will cause the VM to stop booting after Windows or any other OS attempts to append to the DBX (revocation database).
We pulled the varstored update, but those who updated can be affected. There are conditions for the issue:
- Existing VMs are not affected, unless you propagated the new certs to them.
- New VMs are affected only if you never installed UEFI certs to the pool yourself (through XOA or secureboot-certs install), or cleared them using secureboot-certs clear in order to use our default certificates.
If you have the affected version of varstored (rpm -q varstored yields varstored-1.2.0-3.1.xcpng8.3), proceed as follows (a combined command sketch follows after this list):
- On every host, downgrade it with yum downgrade varstored-1.2.0-2.3.xcpng8.3. No reboot or toolstack restart required.
- If you have affected UEFI VMs, that is VMs that meet the conditions above but are not broken yet, don't install updates, turn them off, and fix them by deleting their DBX database: https://docs.xcp-ng.org/guides/guest-UEFI-Secure-Boot/#remove-certificates-from-a-vm. This has to be done when the VM is off. Your OS will add its own DBX afterwards.
- If you already have broken VMs (this warning reaching you too late), revert to a snapshot or backup. Other ways to fix them will require a patched varstored currently in the making.
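Putting the steps above together, a minimal command sketch; the <vm-uuid> placeholder stands for each affected VM's UUID:
# 1. Check whether this host has the affected build
rpm -q varstored    # affected if it prints varstored-1.2.0-3.1.xcpng8.3
# 2. Downgrade it on every host (no reboot or toolstack restart required)
yum downgrade varstored-1.2.0-2.3.xcpng8.3
# 3. For each affected (but not yet broken) UEFI VM, with the VM powered off,
#    delete its DBX database; the guest OS will add its own DBX afterwards
varstore-rm <vm-uuid> d719b2cb-3d3a-4596-a3bc-dad00e67656f dbx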
-
@stormi Downgraded all hosts @home and @office. I have not done anything to running Windows Server VMs or touched anything regarding certs. So I guess I'm all good.
-
@flakpyro Thanks for letting us know. I suppose there was a mirror that was not ready yet, or had a transient issue, and unfortunately XOA's rolling pool update feature is not very resilient to that at the moment.
-
I reverted the package. However, I initially followed the directions provided by Vates in the release blog post and ran "secureboot-certs clear", then on each VM with Secure Boot enabled I clicked "Copy the pools default UEFI Certificates to the VM".
After reverting the updates and running secureboot-certs install again, I went back and clicked "Copy the pools default UEFI Certificates to the VM" again, thinking it would put the old certs back.
It sounds like this may not be enough and I need to remove the dbx record from each of these VMs. Am I correct, or was that enough to fix these VMs?
Per the docs:
varstore-rm <vm-uuid> d719b2cb-3d3a-4596-a3bc-dad00e67656f dbx
Note that the GUID may be found by using varstore-ls <vm-uuid>.
When I run the command I see:
Example:
varstore-ls f9166a11-3c3f-33f1-505c-542ce8e1764d
8be4df61-93ca-11d2-aa0d-00e098032b8c SecureBoot
8be4df61-93ca-11d2-aa0d-00e098032b8c DeployedMode
8be4df61-93ca-11d2-aa0d-00e098032b8c AuditMode
8be4df61-93ca-11d2-aa0d-00e098032b8c SetupMode
8be4df61-93ca-11d2-aa0d-00e098032b8c SignatureSupport
8be4df61-93ca-11d2-aa0d-00e098032b8c PK
8be4df61-93ca-11d2-aa0d-00e098032b8c KEK
d719b2cb-3d3a-4596-a3bc-dad00e67656f db
d719b2cb-3d3a-4596-a3bc-dad00e67656f dbx
605dab50-e046-4300-abb6-3dd810dd8b23 SbatLevel
fab7e9e1-39dd-4f2b-8408-e20e906cb6de HDDP
e20939be-32d4-41be-a150-897f85d49829 MemoryOverwriteRequestControl
bb983ccf-151d-40e1-a07b-4a17be168292 MemoryOverwriteRequestControlLock
9d1947eb-09bb-4780-a3cd-bea956e0e056 PPIBuffer
9d1947eb-09bb-4780-a3cd-bea956e0e056 Tcg2PhysicalPresenceFlagsLock
eb704011-1402-11d3-8e77-00a0c969723b MTC
8be4df61-93ca-11d2-aa0d-00e098032b8c Boot0000
8be4df61-93ca-11d2-aa0d-00e098032b8c Timeout
8be4df61-93ca-11d2-aa0d-00e098032b8c Lang
8be4df61-93ca-11d2-aa0d-00e098032b8c PlatformLang
8be4df61-93ca-11d2-aa0d-00e098032b8c ConIn
8be4df61-93ca-11d2-aa0d-00e098032b8c ConOut
8be4df61-93ca-11d2-aa0d-00e098032b8c ErrOut
9d1947eb-09bb-4780-a3cd-bea956e0e056 Tcg2PhysicalPresenceFlags
8be4df61-93ca-11d2-aa0d-00e098032b8c Key0000
8be4df61-93ca-11d2-aa0d-00e098032b8c Key0001
5b446ed1-e30b-4faa-871a-3654eca36080 0050569B1890
937fe521-95ae-4d1a-8929-48bcd90ad31a 0050569B1890
9fb9a8a1-2f4a-43a6-889c-d0f7b6c47ad5 ClientId
8be4df61-93ca-11d2-aa0d-00e098032b8c Boot0003
8be4df61-93ca-11d2-aa0d-00e098032b8c Boot0004
8be4df61-93ca-11d2-aa0d-00e098032b8c Boot0005
4c19049f-4137-4dd3-9c10-8b97a83ffdfa MemoryTypeInformation
8be4df61-93ca-11d2-aa0d-00e098032b8c Boot0006
8be4df61-93ca-11d2-aa0d-00e098032b8c BootOrder
8c136d32-039a-4016-8bb4-9e985e62786f SecretKey
8be4df61-93ca-11d2-aa0d-00e098032b8c Boot0001
8be4df61-93ca-11d2-aa0d-00e098032b8c Boot0002
So the command would be:
varstore-rm f9166a11-3c3f-33f1-505c-542ce8e1764d d719b2cb-3d3a-4596-a3bc-dad00e67656f dbx
Correct? Does "d719b2cb-3d3a-4596-a3bc-dad00e67656f" indicate the old certs have been re-installed?
-
@flakpyro It's in the middle. But the "d719b2cb-3d3a-4596-a3bc-dad00e67656f dbx" part is always the same across all VMs.
After reverting the updates and running secureboot-certs install again
This will create a pool-level dbx which does not have the problem seen in varstored-1.2.0-3.1. So as long as you propagated the variables to all affected VMs without any errors, there's no need to delete them manually. But you can always delete them again if you're not sure.
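If you want to double-check a particular VM first, a quick way (a sketch, reusing the example VM UUID from earlier in this thread) is to list its variables and look for a dbx entry:
# List the VM's UEFI variables and check whether a dbx entry is present
varstore-ls f9166a11-3c3f-33f1-505c-542ce8e1764d | grep dbx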
-
@dinhngtu Ah okay, I was wondering if "d719b2cb-3d3a-4596-a3bc-dad00e67656f" indicated I was back on the safe certs, since it is the same on all VMs since reverting and clicking "Copy the pools default UEFI Certificates to the VM".
So I need to run varstore-rm f9166a11-3c3f-33f1-505c-542ce8e1764d d719b2cb-3d3a-4596-a3bc-dad00e67656f dbx while the VM is powered off, to be safe?
-
Awesome, thanks for the response. I took a snapshot and tried rebooting a VM, and it booted back up without issue simply by clicking the propagate button on each affected VM after reverting and running "secureboot-certs install".
-
-
@acebmxer Looks like it. You can recover by installing a preliminary test update here:
wget https://koji.xcp-ng.org/repos/user/8/8.3/xcpng-users.repo -O /etc/yum.repos.d/xcpng-users.repo
yum update --enablerepo=xcp-ng-ndinh1 varstored varstored-tools
secureboot-certs clear
Then propagate SB certs to make sure that subsequent dbx updates will run normally.
Sorry for the inconvenience.
-
I was just getting ready to go update my pool... So is it safe now? And do these certs only affect Secure Boot VMs or UEFI VMs as well?
-
@dinhngtu Thank you for that reply.
Just to clarify, I have 4 hosts. 2 hosts I updated to the latest update once it became available to the public, via pool updates. The other 2 hosts I had to push the second beta release to, to get the secureboot-certs install command to work. Then I pushed the rest of the updates again once they became public.
So far I'm only seeing 1 VM having the issue, on the latter 2 hosts that had the beta updates installed. On those hosts are that test PC, 1 Windows server (Printsrver), xoproxy, and xoce.
Should I run those commands on all 4 hosts or just the latter 2 with the troubled VM?

