    XCP-ng 8.3 updates announcements and testing

    • ovicz @ronan-a

      @ronan-a SMlog.txt
      Attached. Please rename it to .tgz and extract it, as I couldn't upload it as an archive file.
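
      For anyone grabbing the attachment, the rename-and-extract step is just the following (assuming the download keeps the SMlog.txt name):

      mv SMlog.txt SMlog.tgz
      tar xzf SMlog.tgz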

      • ovicz @stormi

        @stormi daemon.txt

        Attached. Please rename it to .tgz and extract it, as I couldn't upload it as an archive file.

        Screenshot from 2025-12-17 14-04-50.png

        Strange thing: the disks don't appear in Xen Orchestra, but they are on the drive:

        [14:04 xcp-ng-akz zfs]# ls -l
        total 34863861
        -rw-r--r-- 1 root root 393216 Dec 10 11:19 2b94bb8f-b44d-4c3d-9844-0b2c80e7d11c.qcow2
        -rw-r--r-- 1 root root 16969367552 Dec 17 09:15 37c89d4e-93d0-4f47-a340-4add9fb91307.qcow2
        -rw-r--r-- 1 root root 5435228160 Dec 16 18:41 67d7cb86-864b-4bfc-9ec6-f54dbb9c9f45.qcow2
        -rw-r--r-- 1 root root 10212737024 Dec 17 09:37 740d3e10-ebc9-42a3-bc7c-849f6bcc0e61.qcow2
        -rw-r--r-- 1 root root 2685730816 Dec 16 14:52 76dc4b94-ad88-4514-87ef-99357b93daaf.qcow2
        -rw-r--r-- 1 root root 197408 Dec 10 11:19 8158436c-327a-4dcf-ba49-56e73006ed66.qcow2
        -rw-r--r-- 1 root root 11897602048 Dec 17 10:09 e219112b-73b7-46a4-8fcb-4ee8810b3625.qcow2
        -rw-r--r-- 1 root root 11566120960 Dec 10 09:51 f5d157cb-39df-482b-a39d-432a90d60e89.qcow2
        -rw-r--r-- 1 root root 1984 Dec 10 11:02 filelog.txt

        [14:07 xcp-ng-akz zfs]# zfs list
        NAME       USED  AVAIL  REFER  MOUNTPOINT
        ZFS_Pool  33.3G   416G  33.2G  /mnt/zfs
        [14:07 xcp-ng-akz zfs]# zpool list
        NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
        ZFS_Pool   464G  33.3G   431G        -         -     5%     7%  1.00x  ONLINE  -
        [14:07 xcp-ng-akz zfs]# zpool status
        pool: ZFS_Pool
        state: ONLINE
        config:

        NAME        STATE     READ WRITE CKSUM
        ZFS_Pool    ONLINE       0     0     0
          sda       ONLINE       0     0     0
        

        errors: No known data errors
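
        For comparison, this is roughly how to see which of those VDIs XAPI currently knows about (a sketch; the SR name-label and UUID are placeholders for whatever the ZFS SR is called):

        # find the SR's UUID from its name-label
        xe sr-list name-label=<zfs-sr-name> params=uuid
        # list the VDIs XAPI has for that SR, to compare with the .qcow2 files on disk
        xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size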

        • dthenot Vates 🪐 XCP-ng Team @ovicz

          @ovicz Hello,

          From what I saw in your logs, you have a non-QCOW2 version of sm. That made the QCOW2 VDIs unavailable to the storage stack, and XAPI lost track of them.
          If you update again with the QCOW2 repo enabled:

          yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2
          

          An SR scan will then make the VDIs available to XAPI again, though you will have to identify them and reconnect them to their VMs manually, since that information was lost.
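
          For anyone hitting the same thing, a rough sketch of that scan-and-reattach step with xe (UUIDs are placeholders; pick device numbers to match what each VM originally used):

          # rescan the SR so XAPI picks the QCOW2 VDIs back up
          xe sr-scan uuid=<sr-uuid>
          # list what the scan found, to match VDIs to their VMs
          xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size
          # reconnect one VDI to its VM (repeat per disk; device=0 is the first disk)
          xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=0 bootable=true mode=RW type=Disk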

          • stormi Vates 🪐 XCP-ng Team

            I added a warning to my initial announcement.

            • ovicz @dthenot

              @dthenot Screenshot from 2025-12-17 14-21-43.png

              They appear now. I will try to identify them manually. Thanks for the tip.

              • olivierlambert Vates 🪐 Co-Founder CEO

                Thanks for your feedback 🙂

                • Greg_E

                  Did the three hosts in my lab pool; nothing blew up, so I guess that's good. Just NFS storage with a few Windows VMs and a Debian 13 VM running XO from sources.

                  I think everything now boots via EFI, but there are no Secure Boot machines.
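
                  If it helps anyone double-check, something like this should show a guest's firmware and Secure Boot setting from the CLI (a sketch; the UUID is a placeholder, and platform:secureboot may simply be unset):

                  # firmware=uefi (or bios) is stored in HVM-boot-params
                  xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-params
                  # guest UEFI Secure Boot is the platform:secureboot key
                  xe vm-param-get uuid=<vm-uuid> param-name=platform param-key=secureboot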

                  • Greg_E

                    Working on my production system today and I noticed something new.

                    Three hosts in a pool, doing a Rolling Pool Update.

                    I'm seeing VMs migrate to both available hosts to speed things up, which is not the behavior I've seen in the past. It was interesting to see all three hosts go yellow while the migration was running.

                    OK, that only happened when evacuating the third host; evacuating the second host was back to the usual behavior of moving everything to the same host (#3).

                    And I'm not sure why, but the process from start to finish on host 1 was faster than on the other two; host 1 is the coordinator.

                    Also of note, there seems to be no place to do an RPU from within XO 6.
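
                    For reference, the per-host sequence an RPU automates looks roughly like this with xe if it ever has to be done by hand (a sketch; UUIDs are placeholders, coordinator first, one host at a time):

                    xe host-disable uuid=<host-uuid>     # stop new VMs from starting on this host
                    xe host-evacuate uuid=<host-uuid>    # live-migrate its VMs to the other hosts
                    yum update && reboot                 # patch and reboot the host
                    xe host-enable uuid=<host-uuid>      # once it is back up, let it run VMs again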

                    • stormi Vates 🪐 XCP-ng Team

                      Thank you everyone for your tests and your feedback!

                      The updates are live now: https://xcp-ng.org/blog/2025/12/18/december-2025-security-and-maintenance-updates-for-xcp-ng-8-3-lts/

                      • marcoi

                        Updates done on my two main servers and one dev box I happened to power on today. So far so good.

                        PS: Any way to get the following included in a future networking update? I need it to run a scenario with an OPNsense VM. Right now I have a script I run manually after rebooting the server.

                        ovs-ofctl add-flow xenbr3 "table=0, dl_dst=01:80:c2:00:00:03, actions=flood"

                        thanks
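
                        Until something like that ships, one way to avoid the manual step would be a small systemd unit that re-adds the flow once Open vSwitch is up. A sketch, not an official mechanism (the unit name is made up, and it assumes the OVS service is openvswitch.service and ovs-ofctl lives in /usr/bin):

                        # /etc/systemd/system/opnsense-flood-flow.service (hypothetical name)
                        [Unit]
                        Description=Flood frames for 01:80:c2:00:00:03 on xenbr3 (OPNsense scenario)
                        After=openvswitch.service network.target

                        [Service]
                        Type=oneshot
                        ExecStart=/usr/bin/ovs-ofctl add-flow xenbr3 "table=0, dl_dst=01:80:c2:00:00:03, actions=flood"

                        [Install]
                        WantedBy=multi-user.target

                        # then: systemctl daemon-reload && systemctl enable --now opnsense-flood-flow.service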
