    XCP-ng 8.3 updates announcements and testing

    492 Posts 48 Posters 205.4k Views 67 Watching
    • manilx @MajorP93

      @MajorP93 Since you replied to my message and not to stormi's, I just wanted to clear up any misunderstanding.

    • MajorP93 @manilx

        @manilx No, I did not. I replied to the thread in general, not to your post. Looking at the answers to your specific post where you reported the RPU issue, I can see only one reply, written by stormi (asking for logs).

        Going forward, you can identify replies to your posts by looking for posts that quote your message or tag your username with the @ character.

        • stormi Vates 🪐 XCP-ng Team

          Yes, new update candidates, again!

          This is, in theory, the very last round of fixes before QCOW2 support comes as an official update!

          What changed

          Storage

          • sm + blktap:
            • Fix a never ending coalesce task and an associated tapdisk crash which would leave the QCOW2 VDI corrupted. Thanks @emerson for reporting the issue!
            • Attempting to migrate a QCOW2 VDI to an SR that supports QCOW2 but prefers VHD will now automatically create a QCOW2 disk at the destination when the disk is bigger than 2 TiB. Previously, it was documented as a known issue that the migration would attempt to create a VHD and fail.
            • Another known issue fixed: attempting to resize a QCOW2 VDI with a snapshot on a LVM-based SR no longer fails.
          • xapi:
            • Prevent long migrations from failing due to an expiring XAPI session.
            • More security fixes related to XSA-489.
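
          The destination-format rule above can be sketched roughly like this (function name and parameters are illustrative, not the actual sm code):

          ```python
          TIB = 2 ** 40  # bytes in one tebibyte

          # VHD cannot hold disks much beyond 2 TiB, which motivates the fallback.
          VHD_MAX_SIZE = 2 * TIB

          def pick_dest_format(size_bytes, sr_prefers="vhd", sr_supports_qcow2=True):
              """Return the disk format a migration would create at the destination."""
              if sr_prefers == "vhd" and sr_supports_qcow2 and size_bytes > VHD_MAX_SIZE:
                  return "qcow2"  # too big for VHD: fall back to QCOW2
              return sr_prefers
          ```

          Under this sketch, a 3 TiB disk lands as QCOW2 even on a VHD-preferring SR, while a 1 TiB disk still becomes a VHD.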

          Versions:

          • blktap: 3.55.5-6.6.xcpng8.3 -> 3.55.5-6.7.xcpng8.3
          • sm: 3.2.12-17.6.xcpng8.3 -> 3.2.12-17.7.xcpng8.3
          • xapi: 26.1.3-1.9.xcpng8.3 -> 26.1.3-1.10.xcpng8.3

          Test on XCP-ng 8.3

          If you are using XOSTOR, please refer to our documentation for the update method.

          yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
          yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
          reboot
          

          The usual update rules apply: pool coordinator first, etc.
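
          For scripted rollouts, "pool coordinator first" just means ordering the host list before looping over it; a minimal sketch (host names and helper are hypothetical):

          ```python
          def update_order(hosts, coordinator):
              """Order hosts for updating: pool coordinator first, then the rest."""
              if coordinator not in hosts:
                  raise ValueError("coordinator must be in the host list")
              return [coordinator] + [h for h in hosts if h != coordinator]

          # update_order(["host2", "host1", "coord"], "coord")
          # -> ["coord", "host2", "host1"]
          ```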

          What to test

          Anything related to storage, be it with VHD or QCOW2 disks.

          Test window before official release of the updates

          ~3 days

          • flakpyro @stormi

            @stormi Updates deployed, no issues so far.

            • IgorGlock

              I can reproduce a bug: cloning a VDI (both fast and full clone) from SR-1 to SR-2 does not work.
              My setup: a pool on the current QCOW2 packages (xcp-ng-testing, xcp-ng-candidates), a VM template, and two SRs attached via FC HBA; everything else works fine.

              	
              id	"0moocnjj4-jmge69z174"
              properties	
              method	"vm.create"
              params	
              acls	[]
              clone	false
              existingDisks	
              0	
              name_label	"xcpwin-vm-200.lab.testing.company.net C"
              name_description	"Created by XO"
              size	64424509440
              $SR	"e69a6183-dc17-5b0e-0794-50ee8f57c521"
              name_label	"xcpwin-vm-200.lab.testing.company.net"
              template	"671e59e0-f59d-fd96-7426-9886ad9b8ee3"
              VDIs	[]
              VIFs	
              0	
              network	"261a821c-a373-ba36-2c44-487ed0e1a202"
              allowedIpv4Addresses	[]
              allowedIpv6Addresses	[]
              CPUs	4
              cpusMax	4
              cpuWeight	null
              cpuCap	null
              name_description	"Test VM"
              memory	4294967296
              bootAfterCreate	true
              copyHostBiosStrings	false
              createVtpm	false
              destroyCloudConfigVdiAfterBoot	false
              secureBoot	false
              share	false
              coreOs	false
              tags	[]
              hvmBootFirmware	"uefi"
              name	"API call: vm.create"
              userId	"5f5f1e9c-c928-4898-ab88-62b201698476"
              type	"api.call"
              start	1777726828192
              status	"failure"
              updatedAt	1777727473136
              end	1777727473136
              result	
              code	"INTERNAL_ERROR"
              params	
              0	"Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))])"
              task	
              uuid	"cc16d4ba-ce10-58ef-10c0-808872ec30ca"
              name_label	"Async.VDI.pool_migrate"
              name_description	""
              allowed_operations	[]
              current_operations	{}
              created	"20260502T13:08:44Z"
              finished	"20260502T13:10:25Z"
              status	"failure"
              resident_on	"OpaqueRef:6cac1d11-e91e-5288-8ec8-3d84e6288952"
              progress	1
              type	"<none/>"
              result	""
              error_info	
              0	"INTERNAL_ERROR"
              1	"Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))])"
              other_config	{}
              subtask_of	"OpaqueRef:NULL"
              subtasks	[]
              backtrace	"(((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1815))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 2105))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 2095))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
              message	"INTERNAL_ERROR(Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))]))"
              name	"XapiError"
              stack	"XapiError: INTERNAL_ERROR(Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))]))\n    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1078:24)\n    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1112:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1102:12)\n    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1275:14)"
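
              For anyone trying to read these nested Storage_error strings, the meaningful tokens can be pulled out with a small sketch (a regex heuristic, an assumption rather than XAPI's own s-expression parser):

              ```python
              import re

              ERROR = (
                  "Storage_error ([S(Internal_error);"
                  "S(Storage_error ([S(Migration_preparation_failure);"
                  "S(Storage_error ([S(Backend_error);"
                  "[S(SR_BACKEND_FAILURE_1200);"
                  "[S();S(list index out of range);S()]]]))]))])"
              )

              def inner_errors(message):
                  """Pull every simple S(...) token out of a nested Storage_error string."""
                  return [t for t in re.findall(r"S\(([^()]*)\)", message) if t]

              # ['Internal_error', 'Migration_preparation_failure', 'Backend_error',
              #  'SR_BACKEND_FAILURE_1200', 'list index out of range']
              print(inner_errors(ERROR))
              ```

              The innermost pair is the useful part here: SR_BACKEND_FAILURE_1200 with "list index out of range" from the SM backend.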
              
              • Danp Pro Support Team @IgorGlock

                @IgorGlock Thanks for the report. Can you elaborate on the exact steps required to reproduce this? Also, are you running XOA or XO from sources? Which version or commit?

                • IgorGlock @Danp

                  @Danp
                  I'm running the XOA appliance with a trial license in our lab.
                  Release channel: latest
                  Current version: 6.4.0

                  I removed the two SRs and recreated them, again using QCOW2. VM migration from SR-1 to SR-2 failed too.

                  	
                  id	"0mopksrzz-rnu1secaodl"
                  properties	
                  method	"vm.migrate"
                  params	
                  vm	"64b89180-1f63-3625-4f92-efb5cc15b9d6"
                  sr	"81a41e32-49e5-0f5f-7692-2fc49ec875a2"
                  targetHost	"253ea664-09bf-4212-a3b8-f1588406190e"
                  name	"API call: vm.migrate"
                  userId	"5f5f1e9c-c928-4898-ab88-62b201698476"
                  type	"api.call"
                  start	1777800975551
                  status	"failure"
                  updatedAt	1777801141331
                  infos	
                  0	
                  message	"migration with storage motion"
                  warnings	
                  0	
                  data	
                  host	"253ea664-09bf-4212-a3b8-f1588406190e"
                  sr	"81a41e32-49e5-0f5f-7692-2fc49ec875a2"
                  message	"host have no PBD in the SR"
                  1	
                  data	
                  host	"253ea664-09bf-4212-a3b8-f1588406190e"
                  sr	"81a41e32-49e5-0f5f-7692-2fc49ec875a2"
                  message	"host have no PBD in the SR"
                  end	1777801141331
                  result	
                  code	"INTERNAL_ERROR"
                  params	
                  0	"Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))])"
                  task	
                  uuid	"af35352c-cdbc-0567-8ccc-167d636bff11"
                  name_label	"Async.VM.migrate_send"
                  name_description	""
                  allowed_operations	[]
                  current_operations	{}
                  created	"20260503T09:36:15Z"
                  finished	"20260503T09:39:01Z"
                  status	"failure"
                  resident_on	"OpaqueRef:0d7b2e70-c99b-cf1c-7a54-c98d1916af44"
                  progress	1
                  type	"<none/>"
                  result	""
                  error_info	
                  0	"INTERNAL_ERROR"
                  1	"Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))])"
                  other_config	{}
                  subtask_of	"OpaqueRef:NULL"
                  subtasks	[]
                  backtrace	"(((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1815))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 2650))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
                  message	"INTERNAL_ERROR(Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))]))"
                  name	"XapiError"
                  stack	"XapiError: INTERNAL_ERROR(Storage_error ([S(Internal_error);S(Storage_error ([S(Migration_preparation_failure);S(Storage_error ([S(Backend_error);[S(SR_BACKEND_FAILURE_1200);[S();S(list index out of range);S()]]]))]))]))\n    at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n    at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1078:24)\n    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1112:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1102:12)\n    at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1275:14)"
                  
                  • bufanda @stormi

                    @stormi Installed the updates. Did some backups and VM migrations between various storage types (VHD and QCOW2); looks good so far.

                    • Danp Pro Support Team @IgorGlock

                      @IgorGlock Are you able to open the support tunnel for us to investigate? You can either post the tunnel ID here or open a support ticket.

                      • dthenot Vates 🪐 XCP-ng Team @IgorGlock

                        @IgorGlock Hello,

                        Could you share the exception that should be in /var/log/SMlog?
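
                        For readers hunting for that exception in a copied log, a quick sketch (the helper name is hypothetical; the "Traceback" marker is an assumption based on how SM logs Python exceptions):

                        ```python
                        def last_traceback(log_text):
                            """Return log lines from the final 'Traceback' marker onward, if any."""
                            lines = log_text.splitlines()
                            for i in range(len(lines) - 1, -1, -1):
                                if "Traceback" in lines[i]:
                                    return lines[i:]
                            return []

                        sample = (
                            "May  3 09:38:01 host SM: [123] Traceback (most recent call last):\n"
                            "May  3 09:38:01 host SM: [123] IndexError: list index out of range"
                        )
                        ```

                        On the host itself, `grep -n Traceback /var/log/SMlog` gives the same starting point.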

