XCP-ng

    XCP-ng 8.3 updates announcements and testing

    News · 463 Posts · 47 Posters · 191.6k Views · 62 Watching
    • A Offline
      acebmxer
      last edited by

      @rzr

      I thought I had pressed save after editing the backup job to enable the "purge snapshot when using CBT" option.

      After re-enabling it and clicking save, all is good now: no errors, no stuck coalesce after backups.

        • M Offline
          MajorP93
          last edited by

          I updated my test environment and performed a few tests:

          • migrating VMs back and forth between a VHD-based NFS SR and a QCOW2-based iSCSI SR --> VMs were converted between VHD and QCOW2 just fine, and live migration worked
          • creation of rather big QCOW2-based VMs (2.5+ TB)
          • NBD-enabled delta backups of a mixed set of VMs (small, big, QCOW2, VHD)

          All tests worked fine so far. The only thing I noticed: when converting VHD-based VMs to QCOW2, I was not able to storage-migrate more than 2 VMs at a time; XO said something about "not enough memory". That might be related to the dom0 in my test environment having only 4 GB of RAM, and may not be related to the VHD-to-QCOW2 migration path at all. I never saw this error in my live environment, where every node's dom0 has 8 GB of RAM.
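          In case it helps reproduce this, dom0 memory can be checked and raised from the host itself. A hedged sketch (the xen-cmdline path follows XCP-ng's dom0-memory documentation; the 8 GiB value is only illustrative, so verify on your host before applying):

```shell
# Current dom0 memory, as seen from inside dom0:
free -m

# Inspect / raise the dom0 memory boot parameter (tool path per the
# XCP-ng docs; illustrative value). Takes effect after a host reboot:
# /opt/xensource/libexec/xen-cmdline --get-xen dom0_mem
# /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M
# reboot
```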

          Update candidate looks good so far from my point of view.

          • rzrR Offline
            rzr Vates 🪐 XCP-ng Team
            last edited by stormi

            New update candidates for you to test!

            We are continuing to refine the next batch of updates with planned fixes. This release batch contains fixes to the major storage feature previously announced: see the RC2 announcement for QCOW2 image format support for 2 TiB+ images.

            What changed

            Storage

            QCOW2 image format support is the major feature of this release batch; check the related announcement on the forum.

            Some fixes have been applied to fix issues found during the testing phase.

            • sm: 3.2.12-17.6

              • Limit the QCOW2 VDI max size to 16 TiB including metadata, to allow compatibility with EXT SR (EXT SR is limited to a 16 TiB single file size)

                • If a fully-allocated QCOW2 VDI exceeded this size, XCP-ng would not be able to migrate it to an EXT SR, hence the limitation.

                • In the future, while EXT SR will remain limited to this maximum size, other SR types will evolve towards higher limits. For this, we'll have to rework the existing assumption that all SRs which support the QCOW2 image format share the same maximum VDI size limit, and catch migration attempts towards SRs which cannot receive disks bigger than their maximum limit.

            • blktap: 3.55.5-6.6

              • Update the package's license.

            Versions:

            • blktap: 3.55.5-6.5.xcpng8.3 -> 3.55.5-6.6.xcpng8.3
            • sm: 3.2.12-17.5.xcpng8.3 -> 3.2.12-17.6.xcpng8.3
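            To put these figures in perspective, a quick sketch of the limits in bytes (the 2 TiB figure is VHD's format ceiling mentioned in the RC2 announcement; 16 TiB is the EXT SR cap described above):

```shell
# VHD disks top out at 2 TiB; QCOW2 VDIs on EXT SRs are capped at
# 16 TiB, because EXT SR is limited to a 16 TiB single file.
VHD_LIMIT=$((2 * 1024 * 1024 * 1024 * 1024))
QCOW2_EXT_LIMIT=$((16 * 1024 * 1024 * 1024 * 1024))
echo "VHD max VDI size:      $VHD_LIMIT bytes"
echo "QCOW2-on-EXT max size: $QCOW2_EXT_LIMIT bytes"
```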

            Test on XCP-ng 8.3

            If you are using XOSTOR, please refer to our documentation for the update method.

            yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
            yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
            reboot
            

            The usual update rules apply: pool coordinator first, etc.

            What to test

            The most important change is related to storage: adding QCOW2 support also affects the codebase managing VHD disks. What matters here is, above all, to detect any regression on VHD support (we tested it deeply, but on this matter there's no such thing as too much testing). Of course, you are also welcome to test the QCOW2 image format support.

            See the dedicated thread for more information.

            And, as usual, normal use and anything else you want to test.

            Test window before official release of the updates

            ~3 days

            We would like to thank users who reported feedback since our last call for testing, in less than 24h: @acebmxer, @Andrew, @MajorP93.

            • F Offline
              flakpyro @rzr
              last edited by flakpyro

              Installed on my usual test hosts without issues. However, I am not using the QCOW2 disk format anywhere yet.

              • A Offline
                acebmxer @rzr
                last edited by

                @rzr Installed the latest update and no issues to report. I don't have any 2 TB+ drives in my VMs. Converting from VHD to QCOW2 and backups are all working.

                • A Offline
                  Andrew Top contributor @rzr
                  last edited by

                  @rzr Updates running. VHD use only.

                  • B Offline
                    bufanda
                    last edited by

                    Installed on my usual lab setup; works fine so far, although I get the "Fell back to full backup" message for QCOW2 VMs.

                    • A Offline
                      acebmxer @bufanda
                      last edited by

                      @bufanda Make sure you have enabled the "purge snapshot when using CBT" option, if you are using CBT. After the previous update, that fixed the "fell back to full backup" issue for me.

                      Maybe also try running a manual full backup to reset the VDI chain.
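                      For what it's worth, CBT status can also be checked from the host CLI; a hedged sketch using standard xe VDI parameters (verify against your XCP-ng version, as this is only runnable on a host):

```shell
# List VDIs that currently have changed block tracking enabled:
xe vdi-list cbt-enabled=true params=uuid,name-label

# Or check a single VDI explicitly (the UUID is illustrative):
# xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled
```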

                      Screenshot_20260425_071444.png

                      • A Offline
                        acebmxer @rzr
                        last edited by acebmxer

                        @rzr

                        One thing I am noticing after these updates is a lot more traffic on my TrueNAS. The gaps are me shutting down VMs and starting each one to find the problem VMs, but it seems to be any VM. I didn't notice any performance issues; I just noticed the graph in TrueNAS, where it's usually a flat line with the occasional spike here and there, not the big mess on the left in the first screenshot. 5 VMs running.

                        My XOA is on local storage on the master host and is used for these tests.
                        All VMs are powered off except XOA and XO.

                        Screenshot_20260425_130107.png

                        Here I booted the XO VM and left it idle. The spike after that is me live-migrating it back to the VHD SR and then leaving it idle.

                        Screenshot_20260425_135258-1.png

                        The gap in the middle is XO idle on the VHD-only SR.
                        Screenshot_20260425_135604.png

                        Live-migrating XO back to the QCOW2-only SR
                        Screenshot_20260425_140908.png

                        Migration back to QCOW2 completed
                        Screenshot_20260425_143122.png

                        Left XO idle after migration to the QCOW2 SR.
                        Screenshot_20260425_144314.png

                        Again, all VMs booted and idle.

                        Screenshot_20260425_145451.png

                        From the master host.
                        Screenshot_20260425_150320.png

