    XCP-ng 8.3 updates announcements and testing

    • M Offline
      manilx @stormi
      last edited by manilx

      @stormi I did not check the logs, but if you tell me what to look for, I can. In the config logs on XOA there is nothing relevant.

      • M Offline
        MajorP93 @manilx
        last edited by MajorP93

        I updated both my test and production XCP-ng environments.

        No issues during updates on all 6 hosts.

        • M Offline
          manilx @MajorP93
          last edited by

          @MajorP93 Please note that the updates themselves installed without issues; it was the Rolling Pool Update (RPU) that did not complete. Two different things.

          • M Offline
            MajorP93 @manilx
            last edited by

            @manilx It's absolutely clear to me that the method of installing updates and the changes provided by the updated packages are 2 different things.
            I was not commenting on your message but rather sharing my own experience with this round of patches as this thread is generally related to XCP-ng patches.

            • M Offline
              manilx @MajorP93
              last edited by

              @MajorP93 Since you replied to my message and not to stormi's, I just wanted to clear up any misunderstanding.

              • M Offline
                MajorP93 @manilx
                last edited by MajorP93

                @manilx No, I did not. I replied to the thread in general, not to your post. Looking at the answers to your specific post where you reported the RPU issue, I can only see one reply, written by stormi (asking for logs).

                Going forward, you can identify replies to your posts by looking either at posts that quoted your message or at posts that tagged your username with the @ character.

                • stormiS Offline
                  stormi Vates 🪐 XCP-ng Team
                  last edited by

                  Yes, new update candidates, again!

                  This is, in theory, the very last round of fixes before QCOW2 support comes as an official update!

                  What changed

                  Storage

                  • sm + blktap:
                    • Fix a never-ending coalesce task and an associated tapdisk crash which would leave the QCOW2 VDI corrupted. Thanks @emerson for reporting the issue!
                    • Attempting to migrate a QCOW2 VDI toward an SR that supports QCOW2 but prefers VHD now automatically creates a QCOW2 disk at the destination if the disk is bigger than 2 TiB. Previously, it was documented as a known issue that it would attempt to create a VHD and fail.
                    • Another known issue fixed: attempting to resize a QCOW2 VDI with a snapshot on an LVM-based SR no longer fails.
                  • xapi:
                    • Prevent long migrations from failing due to an expiring XAPI session.
                    • More security fixes related to XSA-489.
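                  The migration fix above hinges on the 2 TiB VHD size ceiling. A minimal sketch of that decision rule, with illustrative values (on a real host the size would come from "xe vdi-param-get uuid=<vdi-uuid> param-name=virtual-size", not hard-coded):

```shell
# Sketch of the new destination-format rule: an SR that prefers VHD but
# supports QCOW2 gets a QCOW2 disk when the VDI exceeds the VHD limit.
VHD_LIMIT=2199023255552   # 2 TiB in bytes, the VHD format ceiling

dest_format() {
  # $1: virtual size of the VDI in bytes
  if [ "$1" -gt "$VHD_LIMIT" ]; then
    echo "qcow2"          # too big for VHD: destination gets a QCOW2 disk
  else
    echo "vhd"            # fits: destination keeps the SR's preferred VHD
  fi
}

dest_format 3298534883328   # a 3 TiB disk
dest_format 1099511627776   # a 1 TiB disk
```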

                  Versions:

                  • blktap: 3.55.5-6.6.xcpng8.3 -> 3.55.5-6.7.xcpng8.3
                  • sm: 3.2.12-17.6.xcpng8.3 -> 3.2.12-17.7.xcpng8.3
                  • xapi: 26.1.3-1.9.xcpng8.3 -> 26.1.3-1.10.xcpng8.3

                  Test on XCP-ng 8.3

                  If you are using XOSTOR, please refer to our documentation for the update method.

                  yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
                  yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
                  reboot
                  

                  The usual update rules apply: pool coordinator first, etc.
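                  The "pool coordinator first" ordering can be sketched as below; the host names and hard-coded coordinator are illustrative only (on a real pool the coordinator's host reference comes from "xe pool-list params=master --minimal"):

```shell
# Illustrative sketch: derive an update order that puts the pool
# coordinator first, then the remaining pool members.
COORDINATOR="host-a"
HOSTS="host-b host-a host-c"

update_order() {
  echo "$COORDINATOR"              # update the coordinator first...
  for h in $HOSTS; do
    if [ "$h" != "$COORDINATOR" ]; then
      echo "$h"                    # ...then every other member, one by one
    fi
  done
}

update_order
```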

                  What to test

                  Anything related to storage, be it with VHD or QCOW2 disks.

                  Test window before official release of the updates

                  ~3 days

                  • F Offline
                    flakpyro @stormi
                    last edited by

                    @stormi Updates deployed, no issues so far.

                    • I Offline
                      IgorGlock
                      last edited by

                      I can reproduce a bug: cloning a VDI (both fast and full clone) from SR-1 to SR-2 does not work.
                      My setup: a pool on the current QCOW2 candidates (xcp-ng-testing, xcp-ng-candidates), a template VM, and two SRs attached via FC HBA; everything else works fine.

                      	
                      id	"0moocnjj4-jmge69z174"
                      properties	
                      method	"vm.create"
                      params	
                      acls	[]
                      clone	false
                      existingDisks	
                      0	
                      name_label	"xcpwin-vm-200.lab.testing.company.net C"
                      name_description	"Created by XO"
                      size	64424509440
                      $SR	"e69a6183-dc17-5b0e-0794-50ee8f57c521"
                      name_label	"xcpwin-vm-200.lab.testing.company.net"
                      template	"671e59e0-f59d-fd96-7426-9886ad9b8ee3"
                      VDIs	[]
                      VIFs	
                      0	
                      network	"261a821c-a373-ba36-2c44-487ed0e1a202"
                      allowedIpv4Addresses	[]
                      allowedIpv6Addresses	[]
                      CPUs	4
                      cpusMax	4
                      cpuWeight	null
                      cpuCap	null
                      name_description	"Test VM"
                      memory	4294967296
                      bootAfterCreate	true
                      copyHostBiosStrings	false
                      createVtpm	false
                      destroyCloudConfigVdiAfterBoot	false
                      secureBoot	false
                      share	false
                      coreOs	false
                      tags	[]
                      hvmBootFirmware	"uefi"
                      name	"API call: vm.create"
                      userId	"5f5f1e9c-c928-4898-ab88-62b201698476"
                      type	"api.call"
                      start	1777726828192
                      status	"failure"
                      updatedAt	1777727473136
                      end	1777727473136
                      result	
                      code	"INTERNAL_ERROR"
                      params	
                      0	"Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))])"
                      task	
                      uuid	"cc16d4ba-ce10-58ef-10c0-808872ec30ca"
                      name_label	"Async.VDI.pool_migrate"
                      name_description	""
                      allowed_operations	[]
                      current_operations	{}
                      created	"20260502T13:08:44Z"
                      finished	"20260502T13:10:25Z"
                      status	"failure"
                      resident_on	"OpaqueRef:6cac1d11-e91e-5288-8ec8-3d84e6288952"
                      progress	1
                      type	"<none/>"
                      result	""
                      error_info	
                      0	"INTERNAL_ERROR"
                      1	"Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))])"
                      other_config	{}
                      subtask_of	"OpaqueRef:NULL"
                      subtasks	[]
                      backtrace	"(((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1815))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 2105))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 2095))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
                      message	"INTERNAL_ERROR(Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))]))"
                      name	"XapiError"
                      stack	"XapiError: INTERNAL_ERROR(Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))]))\n    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1078:24)\n    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1112:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1102:12)\n    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1275:14)"
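                      The SR_BACKEND_FAILURE_1200 above carries a Python "list index out of range" from the SM driver, whose log is /var/log/SMlog on an XCP-ng host. A self-contained sketch of pulling context around that error (a sample file stands in for the real log so the commands run anywhere):

```shell
# On a real host you would run:
#   grep -n -B5 -A5 'list index out of range' /var/log/SMlog
# Here a temporary sample log stands in for /var/log/SMlog.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
SMGC: coalesce start
SMGC: list index out of range
SMGC: aborting
EOF

MATCH=$(grep -c 'list index out of range' "$LOG")   # how many hits
grep -n -B1 -A1 'list index out of range' "$LOG"    # hits with context
rm -f "$LOG"
```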
                      
                      • DanpD Offline
                        Danp Pro Support Team @IgorGlock
                        last edited by

                        @IgorGlock Thanks for the report. Can you elaborate on the exact steps required to reproduce this? Also, are you running XOA or XO from sources? Which version or commit?

