XCP-ng

    XCP-ng 8.3 updates announcements and testing

• gb.123 @stormi

      @stormi

Great idea!
Post updated! 🙂

Update: I also added 'Automatic Backup', which backs up your original file in case something goes wrong.

• gduperrey Vates 🪐 XCP-ng Team

        New security and maintenance update candidates for you to test!

Security vulnerabilities have been detected and fixed in xen and varstored. We are also publishing other non-urgent updates that we had in the pipeline for the next update release.

        Security updates:

        • xen:

          • XSA-477 / VSA-2026-001: A buffer overflow in the Xen shadow tracing code could allow a DomU virtual machine to crash Xen, or potentially escalate privileges.
  • XSA-479 / VSA-2026-003: Some Xen optimizations to avoid clearing internal CPU buffers when not required could allow one guest to leak data from another guest. A mitigation can be applied without the fix by rebooting the vulnerable Xen with "spec-ctrl=ibpb-entry=hvm,ibpb-entry=pv" on the Xen command line, at the cost of decreased performance (see the sketch after this list).
        • varstored:

  • XSA-478 / VSA-2026-002: Within varstored, there were insufficient compiler barriers, creating TOCTOU issues with data in the shared buffer. An attacker with kernel-level access in a VM could escalate privileges by gaining code execution within varstored.
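
For reference, a minimal sketch of applying the XSA-479 mitigation ahead of the fixed packages, assuming the xen-cmdline helper that ships with XCP-ng (check its presence and current settings on your host before relying on it):

# Show the current spec-ctrl setting on the Xen boot command line, if any
/opt/xensource/libexec/xen-cmdline --get-xen spec-ctrl
# Set the mitigation (this replaces any existing spec-ctrl value)
/opt/xensource/libexec/xen-cmdline --set-xen spec-ctrl=ibpb-entry=hvm,ibpb-entry=pv
# The new command line only takes effect after a reboot
reboot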

        Maintenance updates:

        • guest-templates-json:

          • Update VM template labels
          • Sync RHEL10 template with XenServer's
        • intel-microcode:

          • Update to publicly released microcode-20251111
          • Updates for multiple functional issues
        • kernel: Bug fixes in the NFS and NBD stacks for various deadlocks and other race conditions.

• qemu: Backport of the fix for CVE-2021-3929, a DMA reentrancy flaw in NVMe emulation that could lead to a use-after-free triggered by a malicious guest, and potentially arbitrary code execution.

        • smartmontools: Update to minor release 7.5

        • swtpm: Synchronize with release 0.7.3-12 from XenServer. No functional changes.

• xapi: Fix a regression in dynamic memory management during live migration that caused VMs not to balloon down before migrating.

• xcp-ng-release: Prevent the remote syslog configuration from being overwritten by system updates.

        XOSTOR
        In addition to the changes in common packages, the following XOSTOR-specific packages received updates:

• drbd: Reduce I/O load and time during resync.
• drbd-reactor: Miscellaneous improvements to drbd-reactor and event handling.
• linstor:
  • Resource delete: fixed a rare race condition where a delayed DRBD event caused "resource not found" ErrorReports.
  • Miscellaneous changes to make LINSTOR API calls and checks more robust.

If you are using XOSTOR, please refer to our documentation for the update method.

        Test on XCP-ng 8.3

# Refresh repository metadata, including the testing and candidate repos
yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
# Pull in the update candidates from those repos
yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
# Reboot so the new Xen, kernel and microcode are loaded
reboot
        

        The usual update rules apply: pool coordinator first, etc.
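
If you need to check which host is the coordinator first, a quick sketch using the standard xe CLI (run from any pool member):

# The pool object's "master" field holds the coordinator's host UUID
xe pool-list params=master --minimal
# Resolve that UUID to a host name; update and reboot this host first,
# then the remaining hosts one at a time
xe host-list uuid=$(xe pool-list params=master --minimal) params=name-label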

        Versions:

        • guest-templates-json: 2.0.15-1.1.xcpng8.3
        • intel-microcode: 20251029-1.xcpng8.3
        • kernel: 4.19.19-8.0.44.1.xcpng8.3
        • qemu: 4.2.1-5.2.15.2.xcpng8.3
        • smartmontools: 7.5-1.xcpng8.3
        • swtpm: 0.7.3-12.xcpng8.3
        • xapi: 25.33.1-2.3.xcpng8.3
        • xcp-ng-release: 8.3.0-36
        • xcp-python-libs: 3.0.10-1.1.xcpng8.3
        • xen: 4.17.5-23.2.xcpng8.3
        • varstored: 1.2.0-3.5.xcpng8.3

        XOSTOR

        • drbd: 9.33.0-1.el7_9
        • drbd-reactor: 1.9.0-1
        • kmod-drbd: 9.2.16-1.0.xcpng8.3
        • linstor: 1.33.0~rc.2-1.el8
        • linstor-client: 1.27.0-1.xcpng8.3
        • python-linstor: 1.27.0-1.xcpng8.3
        • xcp-ng-linstor: 1.2-4.xcpng8.3

        What to test

        Normal use and anything else you want to test.

        Test window before official release of the updates

        2 days max.

• flakpyro @gduperrey

Installed on my usual selection of hosts (a mixture of AMD and Intel hosts: SuperMicro, Asus, and Minisforum). No issues after a reboot; PCI passthrough, backups, etc. continue to work smoothly.

• Andrew Top contributor @gduperrey

            @gduperrey Standard XCP 8.3 pools updated and running.

• gduperrey Vates 🪐 XCP-ng Team

              Thank you everyone for your tests and your feedback!

              The updates are live now: https://xcp-ng.org/blog/2026/01/29/january-2026-security-and-maintenance-updates-for-xcp-ng-8-3-lts/

• manilx @gduperrey

@gduperrey Updated 2 hosts at home and 5 at the office.
Had to run yum clean metadata; yum update on the CLI (cancelling it to run the RPU from XO instead) for the updates to appear.

• Pilow @gduperrey

@gduperrey We got the XOA update alert and upgraded to XOA 6.1.0,

but no sign of XCP-ng host updates?

When patches are available, they usually show up on their own; is there something to do on the CLI now?

EDIT: my bad, we had a DNS resolution problem... I now see a bunch of updates available...

• robertblissitt @Pilow

Yesterday, from memory: the up-to-date XO (CE) said there were pool updates available, but the three individual XCP-ng hosts showed nothing available. I went to https://xcp-ng.org/blog/tag/security/ and did not see any new patches published for January, and I feared that the previous updates from October had somehow not been fully installed. I put the hosts into Maintenance Mode and rebooted them, and patches were seemingly installed as part of the reboot. I don't recall whether I rebooted (and therefore patched) the master first, as you are supposed to do. This was a bit unsettling.

                    As of this morning, Central Time US, for our three-node XS 8.4 Pool also managed by the same XO (CE), I see patches available in XO both at the Pool level and at the Host level as expected. (Yesterday, I did not see any patches reported by XO for our XS 8.4 Pool.)

• Danp Pro Support Team @robertblissitt

                      @robertblissitt You can check /var/log/yum.log on the XCP-ng hosts to see when the updates were actually applied, but there isn't anything in a standard installation of XO / XCP-ng that would trigger an "automated" update of missing patches.
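
For example, something like this (assuming the default yum logging) shows the most recent operations:

# Last package installs/updates recorded by yum
tail -n 20 /var/log/yum.log
# Or filter for a specific package, e.g. xen
grep xen /var/log/yum.log | tail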

• robertblissitt @Danp

@Danp Thank you, and I could easily be misremembering how the patches got installed; I may have clicked a button (at the host level?) to install them even though I could not see any listed. The other events I mention, however, I am more certain of.

• marcoi

Applied the latest patches to my two-host pool without issue.

• Pilow @marcoi

Currently having heavy issues with a production cluster of 3 hosts.
RPU launched; all VMs except one evacuated the master. We managed to shut that VM down and restart it on another host.
The master's patch & reboot proceeded.
Then RPU tried to evacuate a slave host, and all VMs on it are now locked; we can't shut them down or hard-shutdown them.
We have a critical VM on this host that is still running. We tried to snapshot it in case a hard reboot of the host became necessary, but got OPERATION NOT SUPPORTED DURING AN UPGRADE.
We manually installed the patches on the host without rebooting, and the snapshot then proceeded.
I hope this VM is secured by this snapshot...

A ticket is open with pro support but it's quite stalled for now... no news since yesterday. Ticket#7751752

• olivierlambert Vates 🪐 Co-Founder CEO

                              That's a weird one 🤔 Ping @Team-Hypervisor-Kernel

• Pilow @olivierlambert

@olivierlambert Shoutout to @danp, who took over the incident ticket.

He pointed me the right way to resolving the problem; my production pool is back up and running with its VMs.

There was indeed a difference between what "xl list"/"xenops-cli list" reported and what XOA showed in the web UI.
A couple of "xl destroy" runs to destroy the zombie domains, and a few toolstack restarts later, everything is back up.

I don't know how the hell a simple RPU got me into this situation though...

• olivierlambert Vates 🪐 Co-Founder CEO

                                  Oh wow. Indeed, that's strange. And big kudos to @danp then!!

• robertblissitt @Pilow

@Pilow I've been wondering lately if I should do a Rolling Pool Reboot before any Rolling Pool Update. This might allow me to identify problems in advance, and I would also be installing the patches on freshly rebooted hosts.

• Pilow @robertblissitt

@robertblissitt Yup, in hindsight that seems like a good practice...
My hosts had been up for 4 months and, because of the DNS resolution problem, had 77 patches to catch up on (80 for the one with advanced telemetry enabled).

A rolling reboot would probably have surfaced the initial migration/evacuation problem (and the subsequent zombie VMs) up front,

with no patches applied and no pool left in a semi-upgraded state.

Note to my future self: try a rolling reboot first.
