    XCP-ng 8.3 updates announcements and testing

    331 Posts 36 Posters 116.9k Views 54 Watching
    • ovicz @stormi

      @stormi daemon.txt

      Attached. Please rename it to .tgz and extract it, as I couldn't upload it as an archive file.

      Screenshot from 2025-12-17 14-04-50.png

      Strange thing: the disks don't appear in Xen Orchestra, but they are on the drive:

      [14:04 xcp-ng-akz zfs]# ls -l
      total 34863861
      -rw-r--r-- 1 root root 393216 Dec 10 11:19 2b94bb8f-b44d-4c3d-9844-0b2c80e7d11c.qcow2
      -rw-r--r-- 1 root root 16969367552 Dec 17 09:15 37c89d4e-93d0-4f47-a340-4add9fb91307.qcow2
      -rw-r--r-- 1 root root 5435228160 Dec 16 18:41 67d7cb86-864b-4bfc-9ec6-f54dbb9c9f45.qcow2
      -rw-r--r-- 1 root root 10212737024 Dec 17 09:37 740d3e10-ebc9-42a3-bc7c-849f6bcc0e61.qcow2
      -rw-r--r-- 1 root root 2685730816 Dec 16 14:52 76dc4b94-ad88-4514-87ef-99357b93daaf.qcow2
      -rw-r--r-- 1 root root 197408 Dec 10 11:19 8158436c-327a-4dcf-ba49-56e73006ed66.qcow2
      -rw-r--r-- 1 root root 11897602048 Dec 17 10:09 e219112b-73b7-46a4-8fcb-4ee8810b3625.qcow2
      -rw-r--r-- 1 root root 11566120960 Dec 10 09:51 f5d157cb-39df-482b-a39d-432a90d60e89.qcow2
      -rw-r--r-- 1 root root 1984 Dec 10 11:02 filelog.txt

      [14:07 xcp-ng-akz zfs]# zfs list
      NAME USED AVAIL REFER MOUNTPOINT
      ZFS_Pool 33.3G 416G 33.2G /mnt/zfs
      [14:07 xcp-ng-akz zfs]# zpool list
      NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
      ZFS_Pool 464G 33.3G 431G - - 5% 7% 1.00x ONLINE -
      [14:07 xcp-ng-akz zfs]# zpool status
      pool: ZFS_Pool
      state: ONLINE
      config:

      NAME        STATE     READ WRITE CKSUM
      ZFS_Pool    ONLINE       0     0     0
        sda       ONLINE       0     0     0
      

      errors: No known data errors

      • dthenot (Vates 🪐 XCP-ng Team) @ovicz

        @ovicz Hello,

        From what I saw in your logs, you were running a non-QCOW2 version of sm, which made the QCOW2 VDIs unavailable to the storage stack, so XAPI lost track of them.
        If you update again with the QCOW2 repo enabled:

        yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2
        

        An SR scan will then make the VDIs available to XAPI again. However, you will have to identify them and reconnect them to their VMs manually, since that information was lost.
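To make the manual reattach step concrete, here is a hedged sketch using the standard xe CLI. All UUIDs are placeholders, and the helper function only *prints* the commands rather than running them, so you can review them before pasting them on the host:

```shell
# Sketch of the reattach flow described above. UUIDs are placeholders;
# the function echoes the xe commands instead of executing them, so you
# can inspect and run them on the host yourself.
print_reattach_cmds() {
    sr_uuid="$1"; vm_uuid="$2"; vdi_uuid="$3"
    # Rescan the SR so XAPI rediscovers the qcow2 VDIs present on disk
    echo "xe sr-scan uuid=$sr_uuid"
    # List the rediscovered VDIs to match their UUIDs against the files
    echo "xe vdi-list sr-uuid=$sr_uuid params=uuid,name-label,virtual-size"
    # Recreate the VBD linking a VDI back to its VM (device=0: first disk)
    echo "xe vbd-create vm-uuid=$vm_uuid vdi-uuid=$vdi_uuid device=0 bootable=true type=Disk"
}

print_reattach_cmds "<sr-uuid>" "<vm-uuid>" "<vdi-uuid>"
```

Matching a VDI UUID to a VM is the manual part: compare `virtual-size` and modification times of the `.qcow2` files against what you know about each VM's disks before creating the VBDs.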

        • stormi (Vates 🪐 XCP-ng Team)

          I added a warning to my initial announcement.

          • ovicz @dthenot

            @dthenot Screenshot from 2025-12-17 14-21-43.png

            They appear now. I will try to identify them manually. Thanks for the tip.

            • olivierlambert (Vates 🪐 Co-Founder & CEO)

              Thanks for your feedback 🙂

              • Greg_E

                Did the three hosts in my lab pool; nothing blew up, so I guess that's good. Just NFS storage with a few Windows VMs and a Debian 13 VM running XO from sources.

                I think everything is now an EFI boot, but no Secure Boot machines.

                • Greg_E

                  Working on my production system today and I noticed something new.

                  Three hosts in a pool, doing a Rolling Pool Update.

                  I'm seeing VMs migrate to both available hosts to speed things up, which is not the behavior I've seen in the past. It was interesting to see all three hosts go yellow during the migration.

                  OK, that only happened when evacuating the third host; evacuating the second host was back to the normal behavior of moving everything to the same host (#3).

                  And I'm not sure why, but the process from start to finish was faster on host 1 than on the other two; host 1 is the coordinator.

                  Also of note: there seems to be no place to do an RPU from within XO 6.

                  • stormi (Vates 🪐 XCP-ng Team)

                    Thank you everyone for your tests and your feedback!

                    The updates are live now: https://xcp-ng.org/blog/2025/12/18/december-2025-security-and-maintenance-updates-for-xcp-ng-8-3-lts/

                    • marcoi

                      Updates done on my two main servers and one dev box I happened to power on today. So far so good.

                      PS: Any way to get the following included in the next update for networking? I need it to run a scenario with an OPNsense VM; right now I have a script I run manually after rebooting the server.

                      ovs-ofctl add-flow xenbr3 "table=0, dl_dst=01:80:c2:00:00:03, actions=flood"
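A flow added with `ovs-ofctl add-flow` does not survive a switch or host restart, which is presumably why the manual script is needed. Until something like this ships in the product, one way to persist it is a oneshot systemd unit (XCP-ng 8.3 is CentOS 7-based, so systemd is available). This is only a sketch: the unit name, file path, and ordering on `openvswitch.service` are my assumptions, not an official XCP-ng mechanism.

```ini
# /etc/systemd/system/ovs-eapol-flood.service -- hypothetical unit name/path.
# Re-adds the flood rule for the reserved group MAC 01:80:c2:00:00:03
# (used by 802.1X/EAPOL) once Open vSwitch is up. Adjust the bridge name.
[Unit]
Description=Flood 01:80:c2:00:00:03 frames on xenbr3
After=openvswitch.service
Requires=openvswitch.service

[Service]
Type=oneshot
ExecStart=/usr/bin/ovs-ofctl add-flow xenbr3 "table=0, dl_dst=01:80:c2:00:00:03, actions=flood"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now ovs-eapol-flood.service`, and verify after maintenance with `ovs-ofctl dump-flows xenbr3`, since host updates or network reconfiguration may recreate the bridge without the flow.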

                      thanks

                      • acebmxer @stormi

                        @stormi

                        Regarding UEFI Secure Boot in the recent update.

                        From the pool master host:

                        [19:09 xcp-ng-qhfpcnmb ~]# rpm -q varstored
                        varstored-1.2.0-3.4.xcpng8.3.x86_64
                        
                        8.3 with varstored >= 1.2.0-3.4
                        Secure Boot is ready to use on new VMs without extra configuration. Simply activate Secure Boot on your VMs, and they will be provided with an appropriate set of default Secure Boot variables.
                        
                        We will keep updating the default Secure Boot variables with future updates from Microsoft. If you don't want this behavior, you can lock in these variables by using the Manually Install the Default UEFI Certificates procedure.
                        

                        So for new VMs, nothing needs to be done. But what about existing VMs, Windows or Linux? If this was stated somewhere, I apologize for missing it.

                        • dinhngtu (Vates 🪐 XCP-ng Team) @acebmxer

                          @acebmxer The Recommended actions section of the guest Secure Boot docs has been updated with our latest recommendations. In short, VMs existing prior to the varstored update will need to have their Secure Boot certificates updated with the Propagate certificates button.

                          • acebmxer @dinhngtu

                            @dinhngtu Thanks, I must have read that part with my eyes closed or something. 🤦
