Categories

  • All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
Hello @dthenot, hello @danp, I got the results. You can access our lab, feel free, nothing on it is in production. Support ID: 40797. A high count of VDIs / VMs on a single SR makes the entire storage slow.

[image: 1778273434033-e36db14a-ddd3-4cac-b768-b7bc53b7278c-image-resized.jpeg]

Migration fails because of:

[21:52 xcp-ng-1 ~]# xe sr-scan uuid=43dbbe8e-039a-e66c-f6cb-d88de4f4d962
Error code: SR_BACKEND_FAILURE_1200
Error parameters: , list index out of range,

59 lines of log:

May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/QCOW2-ffcf4c40-0d81-4da7-9908-478629c88358']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.417448266'
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/dmsetup', 'status', 'VG_XenStorage--43dbbe8e--039a--e66c--f6cb--d88de4f4d962-QCOW2--ffcf4c40--0d81--4da7--9908--478629c88358']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/ffcf4c40-0d81-4da7-9908-478629c88358
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.433077125'
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper acquired
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper sent '56171 - 832.64258113'
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ['/sbin/vgs', '--readonly', 'VG_XenStorage-43dbbe8e-039a-e66c-f6cb-d88de4f4d962']
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] pread SUCCESS
May 8 21:40:34 xcp-ng-2 fairlock[8031]: /run/fairlock/devicemapper released
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: released /var/lock/sm/43dbbe8e-039a-e66c-f6cb-d88de4f4d962/sr
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** generic exception: sr_scan: EXCEPTION <class 'IndexError'>, list index out of range
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/SRCommand.py", line 113, in run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] return self._run_locked(sr)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] rv = self._run(sr, target)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/SRCommand.py", line 377, in _run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] return sr.scan(self.params['sr_uuid'])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] LVMSR.LVMSR.scan(self, sr_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMSR.py", line 822, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] new_vdi = self.vdi(cbt_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] return LVMoHBAVDI(self, uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/VDI.py", line 100, in __init__
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] self.load(uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMSR.py", line 1375, in load
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] size = int(self.sr.srcmd.params['args'][0])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ***** LVM over FC: EXCEPTION <class 'IndexError'>, list index out of range
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/SRCommand.py", line 392, in run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] ret = cmd.run(sr)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/SRCommand.py", line 113, in run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] return self._run_locked(sr)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] rv = self._run(sr, target)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/SRCommand.py", line 377, in _run
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] return sr.scan(self.params['sr_uuid'])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMoHBASR", line 163, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] LVMSR.LVMSR.scan(self, sr_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMSR.py", line 822, in scan
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] new_vdi = self.vdi(cbt_uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMoHBASR", line 230, in vdi
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] return LVMoHBAVDI(self, uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/VDI.py", line 100, in __init__
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] self.load(uuid)
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] File "/opt/xensource/sm/LVMSR.py", line 1375, in load
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] size = int(self.sr.srcmd.params['args'][0])
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread]
May 8 21:40:34 xcp-ng-2 SM: [56171][MainThread] lock: closed /var/lock/sm/lvm-43dbbe8e-039a-e66c-f6cb-d88de4f4d962/030878c2-49fd-4b0f-a971-9dfa121249a3

Best regards,
Igor
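Both tracebacks end at `size = int(self.sr.srcmd.params['args'][0])` in `LVMSR.py`'s `load()`, which raises `IndexError` whenever the scan constructs a VDI object without passing a positional size argument. A minimal Python sketch of that failure pattern and a defensive variant (function names and the fallback behavior here are hypothetical illustrations, not the actual sm code or a proposed patch):

```python
# Reproduction of the failure pattern from the traceback: indexing
# params['args'][0] unconditionally fails when 'args' is empty.
def load_strict(params):
    # Mirrors: size = int(self.sr.srcmd.params['args'][0])
    return int(params['args'][0])

def load_defensive(params, default_size=0):
    # Hypothetical defensive variant: fall back when no size arg is given.
    args = params.get('args') or []
    return int(args[0]) if args else default_size

# sr_scan-style call: no positional args supplied.
scan_params = {'sr_uuid': '43dbbe8e-039a-e66c-f6cb-d88de4f4d962', 'args': []}

try:
    load_strict(scan_params)
except IndexError as exc:
    print(f"sr_scan-style failure: {exc}")  # list index out of range

print(load_defensive(scan_params))  # falls back to 0 instead of raising
```

This only illustrates why `xe sr-scan` surfaces "list index out of range"; the real fix belongs in the sm codebase and support case above.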
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    @planedrop said: @Chuckz Yeah it would be a nice feature to see. I think the issue though is how much work it takes when it's not something anyone should be using in production. It's really just a heavy homelab feature. I want it to work, don't get me wrong, but no big org should be doing nested virt, it's just not a good idea and even Hyper-V recommends against it.

    For now, I think you are right, except for Windows-centric shops. Going forward, there is little doubt that Windows will run only on Hyper-V unless third-party hypervisors can keep up with the growing number of Windows features that rely on nested virtualization (NV) support. For example, important Windows 11 security features such as core isolation do not work on my Windows 11 guests, I suspect because of the lack of NV support in Xen. I also think that over time NV will become important for platforms that depend more on Linux than Windows does.

    Has anyone here seen Windows 11 core isolation working on XCP-ng? You can check on a Windows 11 XCP-ng guest under Windows Security -> Device Security -> Core Isolation -> Core Isolation Details. I bet in every case it reports that it does not work. When I try to enable it, Windows accepts the setting and tells me to reboot for it to take effect, but after the reboot core isolation is disabled again.

    Apparently Windows virtualization, while important, has not been important enough for deep-pocketed customers to push for this feature in XCP-ng and upstream Xen. The question I raise is: can a group of XCP-ng users, perhaps working in their home labs, get the ball rolling in upstream Xen without deep-pocketed customers asking for NV? I hope so, because I think the upstream Xen developers really want to add this feature (Vates too, since it is a big negative for XCP-ng compared to alternatives that do support these NV-dependent Windows features). But the Xen developers do not have the time to work on NV without deep-pocketed customers asking for it. We can greatly improve the probability that they will work on NV if we do some of the work for them. I think there are some things we can do to help the Xen developers support NV, and this is what I am proposing.
  • 3k Topics
    28k Posts
    @poddingue There is also a notion of an appliance (a group of VMs): https://docs.xcp-ng.org/appendix/cli_reference/#appliance-commands. It lets you start/stop a group of VMs. I have never tried it, and it doesn't seem to support a boot order within the vApp either.
  • Our hyperconverged storage solution

    46 Topics
    740 Posts
    @dthenot said: @ccooke Hello, You should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts. yum update sm sm-fairlock Then you should be able to re-plug the SR on the master and proceed with the RPU.

    Hello, I had the same problem, and the command resolved the issue. Note that it needs to be run on every host. Everything is working fine again, although I had to complete the pool update manually.
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master. Thanks again for the feedback.