    XCP-ng 8.3 updates announcements and testing

    • manilx @marcoi

      @marcoi I've read in the Vates blog post that no reboot is needed.
      Did it anyway.

      • stormi Vates 🪐 XCP-ng Team

        Indeed, no reboot required if those are the only patches that you are applying, as indicated in the blog post.

        • marcoi @stormi

          @stormi I'm getting a prompt to reboot after applying the updates to the master.
          (screenshot attached)

          • olivierlambert Vates 🪐 Co-Founder CEO

            XO isn't subtle and will ask for a reboot regardless of the type of update (because, for now, there's no way to know whether a reboot is actually required).
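            A common heuristic is that only updates replacing the running kernel or Xen hypervisor really need a host reboot; most others (sm, xapi, ...) only need a toolstack restart. A minimal sketch of such a check — the package-name set and the function are illustrative assumptions, not an XO or XAPI API:

            ```python
            # Hypothetical heuristic: updates that replace the running kernel or the
            # Xen hypervisor need a host reboot; most other packages only need a
            # toolstack restart. The package-name set below is an assumption.
            REBOOT_PACKAGES = {"kernel", "xen-hypervisor", "microcode_ctl"}

            def reboot_recommended(updated_packages):
                """Return True if any updated package likely requires a host reboot."""
                return bool(REBOOT_PACKAGES & set(updated_packages))
            ```

            With a heuristic like this, applying only sm/xapi updates would not trigger the reboot prompt, while a kernel update would.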

            • marcoi @olivierlambert

              @olivierlambert lol okay got it 🙂

              • IgorGlock

                Hello @dthenot, hello @danp,

                I have the results. You can access our lab — feel free, nothing there is in production. Support ID: 40797

                A high count of VDIs/VMs on a single SR makes the entire storage slow.
                (screenshot attached)

                Migration fails with:
                [21:52 xcp-ng-1 ~]# xe sr-scan uuid=43dbbe8e-039a-e66c-f6cb-d88de4f4d962
                Error code: SR_BACKEND_FAILURE_1200
                Error parameters: , list index out of range,

                59 lines of log:

                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/QCOW2-ffcf4c40-0d81-4da7-9908-478629c88358']
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   pread SUCCESS
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.417448266'
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/dmsetup', 'status', 'VG_XenStorage--43dbbe8e--039a--e66c--f6cb--d88de4f4d962-QCOW2--ffcf4c40--0d81--4da7--9908--478629c88358']
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   pread SUCCESS
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/ffcf4c40-0d81-4da7-9908-478629c88358
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.433077125'
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   pread SUCCESS
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.64258113'
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--readonly', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   pread SUCCESS
                May  8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/43dbbe8e-039a-e66c-f6cb-d88de4f4d962/sr
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** generic exception: sr_scan: EXCEPTION <class 'IndexError'>, list index out of range
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return self._run_locked(sr)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     rv = self._run(sr, target)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 377, in _run
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return sr.scan(self.params['sr_uuid'])
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     LVMSR.LVMSR.scan(self, sr_uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 822, in scan
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     new_vdi = self.vdi(cbt_uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return LVMoHBAVDI(self, uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/VDI.py", line 100, in __init__
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     self.load(uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 1375, in load
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     size = int(self.sr.srcmd.params['args'][0])
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** LVM over FC: EXCEPTION <class 'IndexError'>, list index out of range
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 392, in run
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     ret = cmd.run(sr)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return self._run_locked(sr)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     rv = self._run(sr, target)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 377, in _run
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return sr.scan(self.params['sr_uuid'])
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     LVMSR.LVMSR.scan(self, sr_uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 822, in scan
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     new_vdi = self.vdi(cbt_uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return LVMoHBAVDI(self, uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/VDI.py", line 100, in __init__
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     self.load(uuid)
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 1375, in load
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     size = int(self.sr.srcmd.params['args'][0])
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
                May  8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: closed /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/030878c2-49fd-4b0f-a971-9dfa121249a3
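                The traceback bottoms out at `size = int(self.sr.srcmd.params['args'][0])`, which raises `IndexError` whenever the command is invoked with an empty `args` list — which is exactly what `sr-scan` does. A minimal sketch of the failure mode and a defensive guard (illustrative only, not the actual SM patch):

                ```python
                def load_size(params):
                    """Mimic the failing line in LVMSR.load: the size comes from the
                    first positional argument. sr-scan passes no positional args, so
                    indexing args[0] unguarded raises IndexError
                    ("list index out of range").

                    This guard is illustrative only -- not the actual SM fix.
                    """
                    args = params.get("args", [])
                    if not args:
                        # sr-scan case: no size argument supplied; signal the caller
                        # to fall back to another source (e.g. LVM metadata).
                        return None
                    return int(args[0])
                ```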
                
                

                Best regards

                Igor

                • Andrew Top contributor @olivierlambert

                  @olivierlambert Seeing some failures/errors on CR jobs. It leaves VDIs attached to the Control Domain... The next run normally works. I had not seen this error until after the current sm update. Running XO (commit 7e144).

                                  "message": "INTERNAL_ERROR(Storage_error ([S(Illegal_transition); [[S(Activated);S(RO)];[S(Activated);S(RW)]]]))",
                                  "name": "XapiError",
                                  "stack": "XapiError: INTERNAL_ERROR(Storage_error ([S(Illegal_transition);[[S(Activated);S(RO)];[S(Activated);S(RW)]]]))\n    at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_XapiError.mjs:16:12)\n    at default (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1078:24)\n    at file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1112:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1102:12)\n    at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1275:14)\n    at process.processTicksAndRejections (node:internal/process/task_queues:104:5)"
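                  Until the leftover VDIs are cleaned up automatically, one way to spot them is to filter VBD records for ones still plugged into the control domain. A sketch of that filter — the record shape (dicts with `vm-uuid`, `vdi-uuid`, `currently-attached`) is an assumption for illustration, mirroring fields shown by `xe vbd-list params=all`:

                  ```python
                  def find_dom0_attached(vbd_records, control_domain_uuid):
                      """Return the VDI UUIDs of VBDs still plugged into the
                      control domain. The record shape here is an assumption for
                      illustration, not an actual XAPI binding.
                      """
                      return [
                          r["vdi-uuid"]
                          for r in vbd_records
                          if r["vm-uuid"] == control_domain_uuid
                          and r["currently-attached"]
                      ]
                  ```

                  Any UUIDs it returns would be the candidates to unplug once the CR job has finished.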
                  
                  • stormi Vates 🪐 XCP-ng Team

                    Ping @Team-Storage

                    • anthoineb Vates 🪐 XCP-ng Team @IgorGlock

                      Hello @igorglock, Damien is on holiday this week, but he has identified the issue and a patch should be tested for the next release of SM.

                      • ovicz

                        I get this in dmesg after the latest updates:

                        [   54.673443] python3[3691]: segfault at 200000 ip 00007f16eb8eca9f sp 00007ffdb84e9ff0 error 4 in libpython3.6m.so.1.0[7f16eb804000+28d000]
                        [   54.673450] Code: 01 00 00 8d 5f ff 48 8d 2d de 3a 3c 00 c1 eb 03 44 8d 24 1b 4e 8b 44 e5 00 49 8b 70 10 49 39 f0 74 5f 49 8b 40 08 41 83 00 01 <48> 8b 38 48 85 ff 49 89 78 08 74 0d 48 83 c4 10 5b 5d 41 5c c3 0f
                        [   84.587661] xapi[3697]: segfault at 7f28cacaea40 ip 00007f28c6df0ec2 sp 00007f289a5b8af0 error 6 in libjemalloc.so.2[7f28c6d85000+85000]
                        [   84.587669] Code: 48 2b 73 08 44 8b 4d 84 ba 01 00 00 00 49 83 c2 01 49 0f af f1 4c 8d 0d ac 72 42 00 48 89 f1 48 c1 ee 26 48 c1 e9 20 48 d3 e2 <48> 31 54 f3 40 48 8b 8d 58 ff ff ff 48 8b 33 48 8d be 00 00 00 10
                        
