XCP-ng
    XCP-ng 8.3 updates announcements and testing

    News · 475 Posts · 47 Posters · 194.7k Views · 63 Watching
    • acebmxer @rzr

      @rzr

      One thing I am noticing after these updates is a lot more traffic on my TrueNAS. The gaps in the graph are me shutting down VMs and starting each one to find the problem VM, but it seems to be any VM. I didn't notice any performance issues; I just noticed the graph in TrueNAS, which is usually a flat line with the occasional spike here and there, not the big mess on the left in the first screenshot. 5 VMs running.

      My XOA is on local storage on the master host and was used for these tests.
      All VMs are powered off except XOA and XO.

      Screenshot_20260425_130107.png

      Here I booted the XO VM and left it idle. The spike after is me live-migrating it back to the VHD SR and then leaving it idle.

      Screenshot_20260425_135258-1.png

      The gap in the middle is XO idle on the VHD-only SR.
      Screenshot_20260425_135604.png

      Live-migrated XO back to the qcow2-only SR.
      Screenshot_20260425_140908.png

      Migration back to qcow2 completed
      Screenshot_20260425_143122.png

      Left XO idle after the migration to the qcow2 SR.
      Screenshot_20260425_144314.png

      Again, all VMs booted and idle.

      Screenshot_20260425_145451.png

      From the master host.
      Screenshot_20260425_150320.png

      • bufanda @acebmxer

        @acebmxer Purge snapshots has been active since I created the backup job over a year ago. I always enable purge snapshots on backup jobs.

        • stormi (Vates 🪐 XCP-ng Team) @bufanda

          @bufanda I'm being told it's expected the first time after the update, but in theory the next ones should not be fulls. Can you try?

        • stormi (Vates 🪐 XCP-ng Team) @acebmxer

            @acebmxer with qcow2, the way we scan the SR regularly uses more I/O, so this may explain it.
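            If you want to see what a single scan costs, a rough approach (a sketch; `<your-sr-uuid>` is a placeholder, and `iostat` comes from the sysstat package) is to trigger a rescan by hand while sampling device I/O:

            ```shell
            # Sketch: trigger one SR rescan manually while watching block-device I/O.
            SR_UUID="<your-sr-uuid>"   # placeholder; list SR UUIDs with: xe sr-list --minimal
            # On a real host, start sampling and then rescan:
            #   iostat -x 1 &
            #   xe sr-scan uuid="$SR_UUID"
            echo "rescan: xe sr-scan uuid=${SR_UUID}"
            ```

            Comparing the I/O burst of one manual scan against the graph spikes helps tell periodic scans apart from VM traffic.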

            • acebmxer @stormi

              @stormi said:

              @acebmxer with qcow2, the way we scan the SR regularly uses more I/O, so this may explain it.

              Thanks for the update and for confirming this is expected. I think it's a bit excessive given it's only 4-5 VMs.

              • bufanda @stormi

                @stormi said:

                @bufanda I'm being told it's expected the first time after the update, but in theory the next ones should not be fulls. Can you try?

                Just checked, and the VM I was testing with was part of two backup jobs; it seems that when one runs and then the second starts, it falls back to a full. I removed the VM from one backup, and with it being a member of only one backup job, it looks good now. Will keep an eye on it.

                • stormi (Vates 🪐 XCP-ng Team) @acebmxer

                  @acebmxer Can you evaluate the amount of data transferred at each spike? So that we can evaluate if it's more than expected. What's the total size of the VM disks?
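                  One way to answer the total-disk-size question per SR (a sketch: `xe vdi-list` is the real command, the SR UUID is a placeholder, and the sample values below stand in for its output so the pipeline can be checked offline):

                  ```shell
                  # On a real host, get all VDI virtual sizes (bytes) for an SR as a
                  # comma-separated list:
                  #   xe vdi-list sr-uuid=<SR_UUID> params=virtual-size --minimal
                  # Sample values standing in for that output: a 128 GiB and a 256 GiB VDI.
                  sample="137438953472,274877906944"
                  total_bytes=0
                  for size in $(echo "$sample" | tr ',' ' '); do
                      total_bytes=$((total_bytes + size))
                  done
                  echo "total: $((total_bytes / 1024 / 1024 / 1024)) GiB"   # prints: total: 384 GiB
                  ```

                  Note that `virtual-size` is the allocated size; `physical-utilisation` in the same `params` list would show actual space used on the SR.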

                  • acebmxer @stormi

                    @stormi

                    The left side of the chart is all VMs running: 1.5gb/s. Each VM's VDI ranges from 128gb to 256gb allocated (actual disk space used, not sure).

                    screenshot_20260425_130107.png

                    The 200mb/s - 300mb/s on the far right is just XO-CE running idle.
                    screenshot_20260425_144314.png

                    So if each VM is consuming roughly 300mb/s, times 4-5 VMs, that would get close to the 1.5gb/s.
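                    That back-of-the-envelope math checks out (the per-VM figure is read off the TrueNAS graph, so it is approximate):

                    ```shell
                    # Approximate idle throughput per VM, read off the TrueNAS graph (mb/s)
                    per_vm=300
                    vms=5
                    total=$((per_vm * vms))
                    echo "${total} mb/s aggregate"   # prints: 1500 mb/s aggregate
                    ```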

                    • stormi (Vates 🪐 XCP-ng Team)

                      Thanks. Ping @Team-Storage

                    • acebmxer @stormi

                        @stormi said:

                        Thanks. Ping @Team-Storage

                        I think these screenshots show the picture better. It shows more, or seems more dramatic, on TrueNAS.
                        That is two cloud-init VMs (1 Ubuntu and 1 Alma) and 1 existing VM running on the qcow2-enabled SR.

                        Both SRs are on the same TrueNAS, just different datasets.

                        Screenshot 2026-04-28 102057.png

                        Screenshot 2026-04-28 102110.png

                        Screenshot 2026-04-28 102128.png

                        Screenshot 2026-04-28 102142.png

                        Screenshot 2026-04-28 102150.png

                        • ravenet

                          One thing I noticed was that manually copying a qcow2 disk to an SR, even with a properly generated UUID, would lock up the SR from scanning disks and updating its inventory DB.
                          Running VMs seemed fine.

                          Deleting that manually copied disk from the SR released the lock.

                          Rodney
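                          The manual-copy scenario above can be sketched like this (`uuidgen` and `xe sr-scan` are real commands; the SR mount path is an assumption for a file-based SR):

                          ```shell
                          # Generate a properly formed UUID for the copied image
                          new_uuid=$(uuidgen)
                          # On a real host one would then copy the image and rescan
                          # (the mount path is illustrative):
                          #   cp image.qcow2 /run/sr-mount/<SR_UUID>/${new_uuid}.qcow2
                          #   xe sr-scan uuid=<SR_UUID>
                          echo "would copy as ${new_uuid}.qcow2"
                          ```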

                          • stormi (Vates 🪐 XCP-ng Team)

                            We just published most of the updates tested above, plus embargoed security fixes:

                            https://xcp-ng.org/blog/2026/04/28/april-2026-security-and-maintenance-updates-for-xcp-ng-8-3-lts/

                            The release of the QCOW2 image format feature (packages sm, sm-fairlock and blktap) is planned in the coming days. You can still update a system which has these test packages with the security updates published today.

                            Thanks everyone for the tests!

                            • manilx @stormi

                              @stormi Updated 2 pools @office, but on both the RPU (rolling pool update) failed after updating the master and emptying the secondary host. Had to install the patches manually and then move the VMs back.
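                              For reference, the manual fallback when RPU fails partway is roughly this per-host sequence, master first (a sketch; the host UUID is a placeholder, and the commands are shown in a variable rather than executed):

                              ```shell
                              # Per host, master first: move VMs off, install updates, reboot.
                              steps="xe host-evacuate uuid=<HOST_UUID>; yum update; reboot"
                              echo "$steps"
                              ```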

