XCP-ng 8.3 updates announcements and testing
-
Attached. Please rename it to .tgz and extract it, as I couldn't upload it as an archive file.

Strange thing: the disks don't appear in Xen Orchestra, but they are on the drive:
[14:04 xcp-ng-akz zfs]# ls -l
total 34863861
-rw-r--r-- 1 root root 393216 Dec 10 11:19 2b94bb8f-b44d-4c3d-9844-0b2c80e7d11c.qcow2
-rw-r--r-- 1 root root 16969367552 Dec 17 09:15 37c89d4e-93d0-4f47-a340-4add9fb91307.qcow2
-rw-r--r-- 1 root root 5435228160 Dec 16 18:41 67d7cb86-864b-4bfc-9ec6-f54dbb9c9f45.qcow2
-rw-r--r-- 1 root root 10212737024 Dec 17 09:37 740d3e10-ebc9-42a3-bc7c-849f6bcc0e61.qcow2
-rw-r--r-- 1 root root 2685730816 Dec 16 14:52 76dc4b94-ad88-4514-87ef-99357b93daaf.qcow2
-rw-r--r-- 1 root root 197408 Dec 10 11:19 8158436c-327a-4dcf-ba49-56e73006ed66.qcow2
-rw-r--r-- 1 root root 11897602048 Dec 17 10:09 e219112b-73b7-46a4-8fcb-4ee8810b3625.qcow2
-rw-r--r-- 1 root root 11566120960 Dec 10 09:51 f5d157cb-39df-482b-a39d-432a90d60e89.qcow2
-rw-r--r-- 1 root root 1984 Dec 10 11:02 filelog.txt
[14:07 xcp-ng-akz zfs]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
ZFS_Pool 33.3G 416G 33.2G /mnt/zfs
[14:07 xcp-ng-akz zfs]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
ZFS_Pool 464G 33.3G 431G - - 5% 7% 1.00x ONLINE -
[14:07 xcp-ng-akz zfs]# zpool status
pool: ZFS_Pool
state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        ZFS_Pool    ONLINE       0     0     0
          sda       ONLINE       0     0     0

errors: No known data errors
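
For the record, this is the check I'm using to compare against what XAPI sees for that SR (a sketch; <sr-uuid> is a placeholder for the SR's UUID):

xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size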
-
@ovicz Hello,
From what I saw in your logs, you have a non-QCOW2 sm version, which made the QCOW2 VDIs unavailable to the storage stack, so XAPI lost them.
If you update again while enabling the QCOW2 repo:

yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2

an SR scan will make the VDIs available to XAPI again. Though you will have to identify them and reconnect them to their VMs manually, since this information was lost.
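
A rough sketch of the recovery steps, with <sr-uuid>, <vm-uuid> and <vdi-uuid> as placeholders you have to fill in yourself:

# make the VDIs visible to XAPI again
xe sr-scan uuid=<sr-uuid>
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size

# reattach an identified VDI to its VM
xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=0 bootable=true mode=RW type=Disk

(device=0, bootable=true and mode=RW are only examples; adjust them per disk.)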
-
I added a warning to my initial announcement.
-
They appear now. I will try to identify them manually. Thanks for the tip.
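Since the file names on the SR are the VDI UUIDs, I'm matching them to VMs by virtual size (assuming qemu-img is available in dom0; the file below is just one of mine from the listing above):

qemu-img info /mnt/zfs/37c89d4e-93d0-4f47-a340-4add9fb91307.qcow2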
-
Thanks for your feedback

-
Did the three hosts in my lab pool; nothing blew up, so I guess that's good. Just NFS storage with a few Windows VMs and a Debian 13 VM for XO from sources.
I think everything is now EFI boot, but no Secure Boot machines.
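Spot-checked the boot mode with (assuming the firmware key lives in HVM-boot-params, as on my pools):

xe vm-list params=name-label,HVM-boot-params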
-
Working on my production system today and I noticed something new.
Three hosts in a pool, doing a Rolling Pool Update.
I'm seeing VMs migrate to both available hosts to speed things up; this is not the behavior I've seen in the past. Just an interesting thing to see all three hosts go yellow while it is migrating.
OK, that only happened when evacuating the third host; evacuating the second host was back to the normal behavior of moving everything to the same host (#3).
And I'm not sure why, but the process from start to finish was faster on host 1 than on the other two; host 1 is the coordinator.
Also of note, there seems to be no place to do an RPU from within XO6.
-
Thank you everyone for your tests and your feedback!
The updates are live now: https://xcp-ng.org/blog/2025/12/18/december-2025-security-and-maintenance-updates-for-xcp-ng-8-3-lts/
-
Updates done on my two main servers and one dev box I happened to power on today. So far so good.
PS: Any way to get the following included in the next update for networking? I need it to run a scenario with an OPNsense VM. Right now I have a script I run manually after rebooting the server.
ovs-ofctl add-flow xenbr3 "table=0, dl_dst=01:80:c2:00:00:03, actions=flood"
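
For reference, a sketch of how this could be persisted across reboots as a systemd oneshot unit instead of my manual script (the openvswitch.service name and the ovs-ofctl path are my guesses; adjust for your host):

[Unit]
Description=Flood 802.1X EAPOL frames on xenbr3
After=openvswitch.service

[Service]
Type=oneshot
ExecStart=/usr/bin/ovs-ofctl add-flow xenbr3 "table=0, dl_dst=01:80:c2:00:00:03, actions=flood"

[Install]
WantedBy=multi-user.target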
thanks
