Categories

  • All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    Hello @dthenot, hello @danp, I got the results. You can access our lab; feel free, nothing on it is in production. Support ID: 40797. A high count of VDIs/VMs on a single SR makes the entire storage slow. (screenshot attached) Migration fails because of:

    [21:52 xcp-ng-1 ~]# xe sr-scan uuid=43dbbe8e-039a-e66c-f6cb-d88de4f4d962
    Error code: SR_BACKEND_FAILURE_1200
    Error parameters: , list index out of range,

    59 lines of log:

    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/QCOW2-ffcf4c40-0d81-4da7-9908-478629c88358']
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.417448266'
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/dmsetup', 'status', 'VG_XenStorage--43dbbe8e--039a--e66c--f6cb--d88de4f4d962-QCOW2--ffcf4c40--0d81--4da7--9908--478629c88358']
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/ffcf4c40-0d81-4da7-9908-478629c88358
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.433077125'
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.64258113'
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--readonly', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
    May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/43dbbe8e-039a-e66c-f6cb-d88de4f4d962/sr
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** generic exception: sr_scan: EXCEPTION <class 'IndexError'>, list index out of range
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return self._run_locked(sr)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     rv = self._run(sr, target)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 377, in _run
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return sr.scan(self.params['sr_uuid'])
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     LVMSR.LVMSR.scan(self, sr_uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 822, in scan
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     new_vdi = self.vdi(cbt_uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return LVMoHBAVDI(self, uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/VDI.py", line 100, in __init__
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     self.load(uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 1375, in load
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     size = int(self.sr.srcmd.params['args'][0])
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** LVM over FC: EXCEPTION <class 'IndexError'>, list index out of range
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 392, in run
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     ret = cmd.run(sr)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return self._run_locked(sr)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     rv = self._run(sr, target)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/SRCommand.py", line 377, in _run
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return sr.scan(self.params['sr_uuid'])
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     LVMSR.LVMSR.scan(self, sr_uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 822, in scan
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     new_vdi = self.vdi(cbt_uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     return LVMoHBAVDI(self, uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/VDI.py", line 100, in __init__
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     self.load(uuid)
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]   File "/opt/xensource/sm/LVMSR.py", line 1375, in load
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]     size = int(self.sr.srcmd.params['args'][0])
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
    May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: closed /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/030878c2-49fd-4b0f-a971-9dfa121249a3

    Best regards, Igor
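    Both tracebacks above end in LVMSR.py's load() indexing an empty params['args'] list, and the surrounding context lives in /var/log/SMlog on the host that ran the scan (xcp-ng-2 here). A minimal sketch of a filter that prints such a log from its first EXCEPTION marker onward, where the traceback starts; the sm_exceptions helper is an illustration, not an official XCP-ng tool:

    ```shell
    # Sketch, not an official XCP-ng tool: dump an SM log from its first
    # EXCEPTION marker onward. SM logs to /var/log/SMlog in dom0; pass a
    # different file as $1 when experimenting elsewhere.
    sm_exceptions() {
        awk '/EXCEPTION/{hit=1} hit{print}' "${1:-/var/log/SMlog}"
    }
    ```

    Running it on the affected host right after a failing xe sr-scan isolates the traceback without scrolling the whole log.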
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    Chuckz said: For example, important security features in Windows 11 such as core isolation do work on my Windows 11 guests... Edit: That is a typo. I meant to say that core isolation does not work on my Windows 11 guests, and I suspect it is because of the lack of nested virtualization (NV) in Xen and XCP-ng. So my point is that, over time, you can forget about running Windows on any hypervisor except Hyper-V if it is true that we can never use NV in production.
  • 3k Topics
    28k Posts
    @poddingue There is also the notion of an appliance (a group of VMs): https://docs.xcp-ng.org/appendix/cli_reference/#appliance-commands. It lets you start/stop a group of VMs. I have never tried it, and the vApp doesn't seem to have a boot order either.
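    For anyone wanting to try the vApp route from the CLI, a rough sketch of the appliance commands from the reference linked above; the name-label and the <...> UUIDs are placeholders, and this needs a live XCP-ng pool:

    ```shell
    # Create an appliance (vApp); the name is a placeholder.
    xe appliance-create name-label=my-vapp
    # Attach each VM by setting its appliance parameter.
    xe vm-param-set uuid=<vm-uuid> appliance=<appliance-uuid>
    # Start or shut down every VM in the group at once.
    xe appliance-start uuid=<appliance-uuid>
    xe appliance-shutdown uuid=<appliance-uuid>
    ```

    Whether the start order of VMs inside the vApp can be controlled is exactly the open question in the post above.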
  • Our hyperconverged storage solution

    46 Topics
    740 Posts
    G
    @dthenot said: @ccooke Hello, you should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts: yum update sm sm-fairlock. Then you should be able to re-plug the SR on the master and proceed with the RPU. Hello, I had the same problem, and that command resolved the issue. It needs to be run on every host. Everything is working fine again, although I had to complete the pool update manually.
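    Since the update must land on every host, a small loop from the pool master saves repetition. A sketch, assuming root SSH key access from the master to each member; host addresses come from xe host-list:

    ```shell
    # Sketch: update sm and sm-fairlock on every host in the pool.
    # Assumes root SSH key access from the master to each member.
    for addr in $(xe host-list params=address --minimal | tr ',' ' '); do
        echo "Updating $addr ..."
        ssh root@"$addr" 'yum update -y sm sm-fairlock'
    done
    ```

    After the loop, re-plug the SR on the master as described and continue the rolling pool update.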
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.