XCP-ng 8.3 updates announcements and testing
-
Sorry about this. I think when migrating PV VMs from a previous system I had issues and "needed" to install "qemu-img", and I didn't remember doing so. Removing it solved the problem. So all my fault, sorry about this message!
-
Ran the 2 updates released today and...
Back to only showing one VM in Backup/Restore, as it did a month or two ago.
Ran the replication job and all VMs showed up in Backup/Restore again. (XO5) -
anyone know if applying these two patches requires rebooting?

-
@marcoi According to the Vates blog post, no.
Did it anyway. -
Indeed, no reboot required if those are the only patches that you are applying, as indicated in the blog post.
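For reference, a minimal sketch of the no-reboot update flow, assuming shell access to dom0 (the guard branch and `status` variable are only there to make the sketch safe to run elsewhere):

```shell
# Sketch only: the usual patch flow in dom0 on an XCP-ng host. For updates
# like these two, no host reboot is needed; restarting the toolstack
# reloads XAPI without touching running VMs. Kernel/Xen updates still
# require a full reboot.
if command -v yum >/dev/null 2>&1 && command -v xe-toolstack-restart >/dev/null 2>&1; then
    yum update -y            # install the pending XCP-ng updates
    xe-toolstack-restart     # reload the toolstack, no reboot
    status="updated"
else
    status="skipped: not an XCP-ng dom0"
fi
echo "$status"
```

On a pool, the master should be updated first, then the other hosts.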
-
@stormi I'm getting a prompt to reboot after applying the patches to the master.

-
XO isn't subtle and will ask for reboot regardless of the type of update (because for now there's no way to know if reboot is required)
-
@olivierlambert lol okay got it

-
I got the results. You can access our lab, feel free, nothing is in production. Support ID: 40797
A high count of VDIs/VMs on a single SR makes the entire storage slow.

Migration fails because of:
[21:52 xcp-ng-1 ~]# xe sr-scan uuid=43dbbe8e-039a-e66c-f6cb-d88de4f4d962
Error code: SR_BACKEND_FAILURE_1200
Error parameters: , list index out of range,

59 lines of log:

May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/QCOW2-ffcf4c40-0d81-4da7-9908-478629c88358']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.417448266'
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/dmsetup', 'status', 'VG_XenStorage--43dbbe8e--039a--e66c--f6cb--d88de4f4d962-QCOW2--ffcf4c40--0d81--4da7--9908--478629c88358']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/ffcf4c40-0d81-4da7-9908-478629c88358
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.433077125'
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.64258113'
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--readonly', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/43dbbe8e-039a-e66c-f6cb-d88de4f4d962/sr
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** generic exception: sr_scan: EXCEPTION <class 'IndexError'>, list index out of range
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return self._run_locked(sr)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     rv = self._run(sr, target)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 377, in _run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return sr.scan(self.params['sr_uuid'])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     LVMSR.LVMSR.scan(self, sr_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 822, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     new_vdi = self.vdi(cbt_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return LVMoHBAVDI(self, uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/VDI.py", line 100, in __init__
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     self.load(uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 1375, in load
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     size = int(self.sr.srcmd.params['args'][0])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** LVM over FC: EXCEPTION <class 'IndexError'>, list index out of range
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 392, in run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     ret = cmd.run(sr)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return self._run_locked(sr)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     rv = self._run(sr, target)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 377, in _run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return sr.scan(self.params['sr_uuid'])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     LVMSR.LVMSR.scan(self, sr_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 822, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     new_vdi = self.vdi(cbt_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return LVMoHBAVDI(self, uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/VDI.py", line 100, in __init__
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     self.load(uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 1375, in load
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     size = int(self.sr.srcmd.params['args'][0])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: closed /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/030878c2-49fd-4b0f-a971-9dfa121249a3

Best regards
Igor
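The failing call at the bottom of the traceback is `size = int(self.sr.srcmd.params['args'][0])` in LVMSR.py's `load()`, which raises IndexError when `args` is empty. A minimal, hypothetical illustration of that failure mode (`read_size` is an invented name, not the actual sm code):

```python
# Hypothetical simplification of the failing pattern in LVMSR.py's load():
# indexing params['args'][0] without checking the list is what produces the
# IndexError ("list index out of range") seen in the traceback above.
def read_size(params):
    args = params.get("args", [])
    if not args:
        # The unguarded original hits IndexError here instead
        raise ValueError("sr_scan called without a size argument")
    return int(args[0])

print(read_size({"args": ["1024"]}))  # → 1024
```

In other words, `sr_scan` ends up constructing a VDI object down a code path that expects a size argument it was never given.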
-
@olivierlambert Seeing some failures/errors on CR jobs. It leaves VDIs attached to the Control Domain... on the next run it normally works. I had not seen this error until after the current sm update. Running XO (commit 7e144).
"message": "INTERNAL_ERROR(Storage_error ([S(Illegal_transition); [[S(Activated);S(RO)];[S(Activated);S(RW)]]]))", "name": "XapiError", "stack": "XapiError: INTERNAL_ERROR(Storage_error ([S(Illegal_transition);[[S(Activated);S(RO)];[S(Activated);S(RW)]]]))\n at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1078:24)\n at file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1112:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1102:12)\n at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1275:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:104:5)"