XCP-ng Forum (News category)

    LargeBlockSR for 4KiB blocksize disks
    • dthenot (Vates 🪐 XCP-ng Team):

      Hello again,

      It is now available in 8.2.1 with the testing packages; you can install them by enabling the testing repository and updating.
      Available in sm 2.30.8-10.2.

      yum update --enablerepo=xcp-ng-testing sm xapi-core xapi-xe xapi-doc
      

      You then need to restart the toolstack.
      Afterwards, you can create an SR with the command from the post above.
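      For reference, a minimal end-to-end sequence would look like this; the device path and name-label are placeholders to adapt, not values from this thread:

      xe-toolstack-restart
      xe sr-create host-uuid=<host-uuid> type=largeblock \
          name-label="Local 4KiB disk SR" \
          device-config:device=/dev/disk/by-id/<your-4kn-disk>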

      • stormi (Vates 🪐 XCP-ng Team) @dthenot:

        @dthenot said in LargeBlockSR for 4KiB blocksize disks: (the announcement above, quoted in full)

        Update: the driver is now available on any up-to-date XCP-ng 8.2.1 or 8.3. There is no need to update from the testing repositories (you might get something unexpected).
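        In other words, a plain update from the standard repositories, followed by a toolstack restart, is enough:

        yum update
        xe-toolstack-restart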

        • Greg_E @stormi:

          @stormi

          Is there a way to enable this on an SR without using the xe CLI? Can we specify it as an option during SR creation from XO? Or did this become the normal way to create an SR? I have both 8.2.x and 8.3, with TrueNAS SCALE and NFS shares on both.

          • stormi (Vates 🪐 XCP-ng Team) @Greg_E:

            @Greg_E It is a separate SR type that you can select when you create an SR in Xen Orchestra.

            It's also for local SRs only: for NFS you still use the NFS storage driver.
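            If you want to check from the CLI that the driver is registered, a standard xe list query along these lines should work (the field list shown is illustrative):

            xe sm-list type=largeblock params=name-label,type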

            • Greg_E @stormi:

              @stormi

              Thanks, I suspected that was the case from reading the other threads.

              • yllar:

                @dthenot Are multiple SRs with type=largeblock supported on the same host?

                • olivierlambert (Vates 🪐 Co-Founder & CEO):

                  I don't see any reason it couldn't, but I prefer @dthenot to answer 🙂

                  • yllar:

                    @dthenot

                    The NVMe drives attached to the PERC H965i are identified as SCSI disks by the operating system; the OS exposes a 4Kn device at /dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05.
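                    For reference, the sector sizes can be confirmed with blockdev, the same tool the driver calls in the log below:

                    blockdev --getss /dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05    # logical sector size: 4096 on a 4Kn device
                    blockdev --getpbsz /dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05  # physical block size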

                    Below is a log of creating a type=largeblock SR on the latest XCP-ng 8.2.

                    It takes about 10 minutes and logs some errors, but it does successfully create the SR and we are able to use it.
                    Are all these errors expected, and can we trust that it's working normally?

                    Console:

                    # xe sr-create host-uuid=383399d1-b304-48db-ad4b-bc8fe8b56f89 type=largeblock name-label="Local Main Name storage" device-config:device=/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05
                    42535e39-4c98-22c6-71eb-303caa3fc97b
                    

                    SM.log

                    May  2 08:22:29 a1 SM: [15928] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:22:29 a1 SM: [15928] sr_create {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|2d81471d-e02c-4c9c-8273-51527e849c1d|SR.create', 'args': ['0'], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:61307cea-d759-4f9a-9052-e44c0c574b1f', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_create', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
                    May  2 08:22:29 a1 SM: [15928] ['blockdev', '--getss', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] ['losetup', '-f', '-v', '--show', '--sector-size', '512', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] util.test_scsiserial: Not a serial device: /dev/loop0
                    May  2 08:22:29 a1 SM: [15928] lock: opening lock file /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] ['/sbin/vgs', '--readonly', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:22:29 a1 SM: [15928] FAILED in util.pread: (rc 5) stdout: '', stderr: '  WARNING: Not using device /dev/sda for PV HMeziz-gDTa-cNLl-1B2E-ebbh-e5ki-RIhK5T.
                    May  2 08:22:29 a1 SM: [15928]   WARNING: PV HMeziz-gDTa-cNLl-1B2E-ebbh-e5ki-RIhK5T prefers device /dev/loop0 because device was seen first.
                    May  2 08:22:29 a1 SM: [15928]   Volume group "XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b" not found
                    May  2 08:22:29 a1 SM: [15928]   Cannot process volume group XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b
                    May  2 08:22:29 a1 SM: [15928] '
                    May  2 08:22:29 a1 SM: [15928] ['/bin/dd', 'if=/dev/zero', 'of=/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05.512', 'bs=1M', 'count=10', 'oflag=direct']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] ['/sbin/vgcreate', '--metadatasize', '10M', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05.512']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] ['/sbin/vgchange', '-an', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] ['/sbin/lvdisplay', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:22:29 a1 SM: [15928] FAILED in util.pread: (rc 5) stdout: '', stderr: '  WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
                    May  2 08:22:29 a1 SM: [15928]   WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/loop0 because device was seen first.
                    May  2 08:22:29 a1 SM: [15928]   Failed to find logical volume "XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b"
                    May  2 08:22:29 a1 SM: [15928] '
                    May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
                    May  2 08:22:29 a1 SM: [15928] ['lvcreate', '-n', '42535e39-4c98-22c6-71eb-303caa3fc97b', '-L', '29302004', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] ['lvchange', '-ay', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
                    May  2 08:22:29 a1 SM: [15928] ['mkfs.ext4', '-F', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
                    
                    
                    May  2 08:31:40 a1 SM: [18985] lock: opening lock file /var/lock/sm/07ab18c4-a76f-d1fc-4374-babfe21fd679/sr
                    May  2 08:31:40 a1 SM: [18985] LVMCache created for VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679
                    May  2 08:31:40 a1 SM: [18985] lock: opening lock file /var/lock/sm/.nil/lvm
                    May  2 08:31:40 a1 SM: [18985] ['/sbin/vgs', '--readonly', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
                    May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [15928]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [15928] ['/usr/lib/udev/scsi_id', '-g', '--device', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05.512']
                    May  2 08:32:24 a1 SM: [18985] ***** Long LVM call of 'vgs' took 43.6255850792
                    May  2 08:32:24 a1 SM: [18985] Entering _checkMetadataVolume
                    May  2 08:32:24 a1 SM: [18985] LVMCache: will initialize now
                    May  2 08:32:24 a1 SM: [18985] LVMCache: refreshing
                    May  2 08:32:24 a1 SM: [18985] lock: acquired /var/lock/sm/.nil/lvm
                    May  2 08:32:24 a1 SM: [18985] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
                    May  2 08:32:24 a1 SM: [15928] FAILED in util.pread: (rc 1) stdout: '', stderr: ''
                    May  2 08:32:24 a1 SM: [15928] ['losetup', '--list']
                    May  2 08:32:24 a1 SM: [15928]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [15928] ['losetup', '-d', '/dev/loop0']
                    May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [18985] lock: released /var/lock/sm/.nil/lvm
                    May  2 08:32:24 a1 SM: [15928]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [15928] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:24 a1 SM: [19294] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:24 a1 SM: [19294] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:24 a1 SM: [19294] sr_attach {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|ab5bca7f-6597-4874-948a-b4c8a0b4283e|SR.attach', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:5abf6e38-a9b0-44ff-a095-e95786bb30f7', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_attach', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
                    May  2 08:32:24 a1 SMGC: [19294] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: abort ===
                    May  2 08:32:24 a1 SM: [19294] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                    May  2 08:32:24 a1 SM: [19294] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
                    May  2 08:32:24 a1 SM: [19294] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active, acquired: True (exists: True)
                    May  2 08:32:24 a1 SMGC: [19294] abort: releasing the process lock
                    May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
                    May  2 08:32:24 a1 SM: [19294] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                    May  2 08:32:24 a1 SM: [19294] RESET for SR 42535e39-4c98-22c6-71eb-303caa3fc97b (master: True)
                    May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                    May  2 08:32:24 a1 SM: [19294] set_dirty 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013' succeeded
                    May  2 08:32:24 a1 SM: [19294] ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['losetup', '--list']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['losetup', '-f', '-v', '--show', '--sector-size', '512', '/dev/sda']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['lvchange', '-ay', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['fsck', '-a', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['mount', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['/usr/lib/udev/scsi_id', '-g', '--device', '/dev/sda.512']
                    May  2 08:32:24 a1 SM: [19294] FAILED in util.pread: (rc 1) stdout: '', stderr: ''
                    May  2 08:32:24 a1 SM: [19294] Dom0 disks: ['/dev/nvme0n1p']
                    May  2 08:32:24 a1 SM: [19294] Block scheduler: /dev/sda.512 (/dev/loop) wants noop
                    May  2 08:32:24 a1 SM: [19294] no path /sys/block/loop/queue/scheduler
                    May  2 08:32:24 a1 SM: [19294] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] ['ls', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b', '-1', '--color=never']
                    May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
                    May  2 08:32:24 a1 SM: [19294] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running, acquired: True (exists: True)
                    May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                    May  2 08:32:24 a1 SM: [19294] Kicking GC
                    May  2 08:32:24 a1 SMGC: [19294] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: gc ===
                    May  2 08:32:24 a1 SMGC: [19335] Will finish as PID [19336]
                    May  2 08:32:24 a1 SMGC: [19294] New PID [19335]
                    May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SM: [19336] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                    May  2 08:32:25 a1 SMGC: [19336]          ***********************
                    May  2 08:32:25 a1 SMGC: [19336]          *  E X C E P T I O N  *
                    May  2 08:32:25 a1 SMGC: [19336]          ***********************
                    May  2 08:32:25 a1 SMGC: [19336] gc: EXCEPTION <class 'util.SMException'>, SR 42535e39-4c98-22c6-71eb-303caa3fc97b not attached on this host
                    May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3388, in gc
                    May  2 08:32:25 a1 SMGC: [19336]     _gc(None, srUuid, dryRun)
                    May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3267, in _gc
                    May  2 08:32:25 a1 SMGC: [19336]     sr = SR.getInstance(srUuid, session)
                    May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1552, in getInstance
                    May  2 08:32:25 a1 SMGC: [19336]     return FileSR(uuid, xapi, createLock, force)
                    May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 2334, in __init__
                    May  2 08:32:25 a1 SMGC: [19336]     SR.__init__(self, uuid, xapi, createLock, force)
                    May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1582, in __init__
                    May  2 08:32:25 a1 SMGC: [19336]     raise util.SMException("SR %s not attached on this host" % uuid)
                    May  2 08:32:25 a1 SMGC: [19336]
                    May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                    May  2 08:32:25 a1 SMGC: [19336] * * * * * SR 42535e39-4c98-22c6-71eb-303caa3fc97b: ERROR
                    May  2 08:32:25 a1 SMGC: [19336]
                    May  2 08:32:25 a1 SM: [19367] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SM: [19367] sr_update {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|f960ef27-5d11-461d-9d4f-072e24be96b0|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:220b9035-d899-4882-9627-bd6d4adb9e9c', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
                    May  2 08:32:25 a1 SM: [19387] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SM: [19387] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SM: [19387] sr_scan {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|9e3a3942-fb33-46d2-bb01-5463e16ff9a1|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:7e309bfe-889c-464e-b6ed-949d9e4adfb5', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
                    May  2 08:32:25 a1 SM: [19387] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
                    May  2 08:32:25 a1 SM: [19387]   pread SUCCESS
                    May  2 08:32:25 a1 SM: [19387] ['ls', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b', '-1', '--color=never']
                    May  2 08:32:25 a1 SM: [19387]   pread SUCCESS
                    May  2 08:32:25 a1 SM: [19387] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                    May  2 08:32:25 a1 SM: [19387] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running, acquired: True (exists: True)
                    May  2 08:32:25 a1 SM: [19387] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                    May  2 08:32:25 a1 SM: [19387] Kicking GC
                    May  2 08:32:25 a1 SMGC: [19387] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: gc ===
                    May  2 08:32:25 a1 SMGC: [19398] Will finish as PID [19399]
                    May  2 08:32:25 a1 SM: [19399] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                    May  2 08:32:25 a1 SM: [19399] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
                    May  2 08:32:25 a1 SMGC: [19387] New PID [19398]
                    May  2 08:32:25 a1 SM: [19387] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SM: [19399] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SMGC: [19399] Found 0 cache files
                    May  2 08:32:25 a1 SM: [19399] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active, acquired: True (exists: True)
                    May  2 08:32:25 a1 SM: [19399] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr, acquired: True (exists: True)
                    May  2 08:32:25 a1 SM: [19399] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
                    May  2 08:32:25 a1 SM: [19399]   pread SUCCESS
                    May  2 08:32:25 a1 SMGC: [19399] SR 4253 ('Local Main Name storage') (0 VDIs in 0 VHD trees): no changes
                    May  2 08:32:25 a1 SM: [19399] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SMGC: [19399] No work, exiting
                    May  2 08:32:25 a1 SMGC: [19399] GC process exiting, no work left
                    May  2 08:32:25 a1 SM: [19399] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
                    May  2 08:32:25 a1 SMGC: [19399] In cleanup
                    May  2 08:32:25 a1 SMGC: [19399] SR 4253 ('Local Main Name storage') (0 VDIs in 0 VHD trees): no changes
                    May  2 08:32:25 a1 SM: [19432] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SM: [19432] sr_update {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|3207059a-03f4-42a3-bd23-23d226386b08|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:1a441cae-d761-45b0-a025-c2ca371f0639', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
                    May  2 08:32:25 a1 SM: [19449] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                    May  2 08:32:25 a1 SM: [19449] sr_update {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|6486f365-f80b-427e-afc5-e5c8cc1b4931|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:1501190d-78ae-4d34-9b7f-bc6fb1103494', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
                    
                    • olivierlambert (Vates 🪐 Co-Founder & CEO):

                      Reping @dthenot

                      • yllar:

                        Reping @dthenot

                        • dthenot (Vates 🪐 XCP-ng Team) @yllar:

                          @yllar

                          Sorry, I missed the first ping.

                          May  2 08:31:40 a1 SM: [18985] ['/sbin/vgs', '--readonly', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
                          May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
                          May  2 08:32:24 a1 SM: [18985] ***** Long LVM call of 'vgs' took 43.6255850792
                          

                           That would explain why the SR took so long to create: 43 seconds for a single vgs call.
                           Can you run a vgs call yourself on your host?
                           Does it take a long time?
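                           For example, timing the exact call from your log:

                           time /sbin/vgs --readonly VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679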

                          This exception is "normal":

                          May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                          May  2 08:32:25 a1 SMGC: [19336]          ***********************
                          May  2 08:32:25 a1 SMGC: [19336]          *  E X C E P T I O N  *
                          May  2 08:32:25 a1 SMGC: [19336]          ***********************
                          May  2 08:32:25 a1 SMGC: [19336] gc: EXCEPTION <class 'util.SMException'>, SR 42535e39-4c98-22c6-71eb-303caa3fc97b not attached on this host
                          May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3388, in gc
                          May  2 08:32:25 a1 SMGC: [19336]     _gc(None, srUuid, dryRun)
                          May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3267, in _gc
                          May  2 08:32:25 a1 SMGC: [19336]     sr = SR.getInstance(srUuid, session)
                          May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1552, in getInstance
                          May  2 08:32:25 a1 SMGC: [19336]     return FileSR(uuid, xapi, createLock, force)
                          May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 2334, in __init__
                          May  2 08:32:25 a1 SMGC: [19336]     SR.__init__(self, uuid, xapi, createLock, force)
                          May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1582, in __init__
                          May  2 08:32:25 a1 SMGC: [19336]     raise util.SMException("SR %s not attached on this host" % uuid)
                          May  2 08:32:25 a1 SMGC: [19336]
                          May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                          May  2 08:32:25 a1 SMGC: [19336] * * * * * SR 42535e39-4c98-22c6-71eb-303caa3fc97b: ERROR
                          May  2 08:32:25 a1 SMGC: [19336]
                          

                           It's the garbage collector trying to run on the SR while it is still in the process of attaching.
                           It's odd, though, because it's the sr_attach call itself that launched the GC.
                           Does the GC run normally on this SR on subsequent attempts?

                           Otherwise, I don't see anything worrying in the logs you shared.
                           It should be safe to use 🙂
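                           If you want to watch the next runs, the GC lines are tagged SMGC in the SM log:

                           grep SMGC /var/log/SMlog | tail -n 50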

                          • yllar:

                            Hi, thank you for the response.

                            Currently these commands return data immediately:

                            [10:52 a1 ~]# vgs
                              WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
                              WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/sda.512 because device is used by LV.
                              VG                                                     #PV #LV #SN Attr   VSize   VFree  
                              VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679       1   1   0 wz--n- 405.55g 405.55g
                              XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b   1   1   0 wz--n-  27.94t      0 
                            [10:40 a1 ~]# /sbin/vgs --readonly VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679
                              WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
                              WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/sda.512 because device is used by LV.
                              VG                                                 #PV #LV #SN Attr   VSize   VFree  
                              VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679   1   1   0 wz--n- 405.55g 405.55g
                            [11:02 a1 log]# /sbin/vgs --readonly XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b
                              WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
                              WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/sda.512 because device is used by LV.
                              VG                                                     #PV #LV #SN Attr   VSize  VFree
                              XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b   1   1   0 wz--n- 27.94t    0 
                            

                            Maybe this call was somehow held up by a lock during the 512-byte (largeblock) SR creation:

                            May  2 08:31:40 a1 SM: [18985] ['/sbin/vgs', '--readonly', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
                            May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
                            May  2 08:32:24 a1 SM: [18985] ***** Long LVM call of 'vgs' took 43.6255850792
                            

                            The GC seems to run normally on subsequent attempts.

                            SR:

                            [11:02 a1 log]# xe sr-list uuid=42535e39-4c98-22c6-71eb-303caa3fc97b
                            uuid ( RO)                : 42535e39-4c98-22c6-71eb-303caa3fc97b
                                      name-label ( RW): Local Main Nvme storage
                                name-description ( RW): 
                                            host ( RO): A1 - L 11 - 8.2 - 6740E - 30TB - 512GB - 100G - 2U
                                            type ( RO): largeblock
                                    content-type ( RO): 
                            

                            GC:

                            May 19 10:40:29 a1 SM: [4167] Kicking GC
                            May 19 10:40:29 a1 SMGC: [4167] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: gc ===
                            May 19 10:40:29 a1 SMGC: [4286] Will finish as PID [4287]
                            May 19 10:40:29 a1 SM: [4287] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
                            May 19 10:40:29 a1 SM: [4287] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
                            May 19 10:40:29 a1 SMGC: [4167] New PID [4286]
                            May 19 10:40:29 a1 SM: [4176]   pread SUCCESS
                            May 19 10:40:29 a1 SM: [4176] lock: released /var/lock/sm/.nil/lvm
                            May 19 10:40:29 a1 SM: [4176] lock: released /var/lock/sm/07ab18c4-a76f-d1fc-4374-babfe21fd679/sr
                            May 19 10:40:29 a1 SM: [4176] Entering _checkMetadataVolume
                            May 19 10:40:29 a1 SM: [4176] lock: acquired /var/lock/sm/07ab18c4-a76f-d1fc-4374-babfe21fd679/sr
                            May 19 10:40:29 a1 SM: [4176] sr_scan {'sr_uuid': '07ab18c4-a76f-d1fc-4374-babfe21fd679', 'subtask_of': 'DummyRef:|37b48251-6927-4828-8c6b-38f9b21bc157|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:970bc095-e2eb-434b-bc9f-1c4e8f58b5c2', 'device_config': {'device': '/dev/disk/by-id/nvme-Dell_BOSS-N1_CN0CMFVPFCP0048500D5-part3', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:c67721e5-74b8-47fe-a180-de50665948c5'}
                            May 19 10:40:29 a1 SM: [4176] LVHDSR.scan for 07ab18c4-a76f-d1fc-4374-babfe21fd679
                            May 19 10:40:29 a1 SM: [4176] lock: acquired /var/lock/sm/.nil/lvm
                            May 19 10:40:29 a1 SM: [4176] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
                            May 19 10:40:29 a1 SM: [4167] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                            May 19 10:40:29 a1 SM: [4287] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                            May 19 10:40:29 a1 SMGC: [4287] Found 0 cache files
                            May 19 10:40:29 a1 SM: [4287] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active, acquired: True (exists: True)
                            May 19 10:40:29 a1 SM: [4287] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr, acquired: True (exists: True)
                            May 19 10:40:29 a1 SM: [4287] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
                            May 19 10:40:29 a1 SM: [4287]   pread SUCCESS
                            May 19 10:40:29 a1 SMGC: [4287] SR 4253 ('Local Main Nvme storage') (5 VDIs in 3 VHD trees):
                            May 19 10:40:29 a1 SMGC: [4287]         255478b4(20.000G/2.603G)
                            May 19 10:40:29 a1 SMGC: [4287]         *5b3775a4(20.000G/2.603G)
                            May 19 10:40:29 a1 SMGC: [4287]             fefe7bde(20.000G/881.762M)
                            May 19 10:40:29 a1 SMGC: [4287]             761e5fa7(20.000G/45.500K)
                            May 19 10:40:29 a1 SMGC: [4287]         a3f4b8e5(20.000G/2.675G)
                            May 19 10:40:29 a1 SMGC: [4287]
                            May 19 10:40:29 a1 SM: [4287] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
                            May 19 10:40:29 a1 SM: [4176]   pread SUCCESS
                            May 19 10:40:29 a1 SM: [4176] lock: released /var/lock/sm/.nil/lvm
                            May 19 10:40:29 a1 SMGC: [4287] Got sm-config for *5b3775a4(20.000G/2.603G): {'vhd-blocks': 'eJz7///3DgYgaGBABZTyiQX/oYBc/bjsJ9Y8mNXUcgepAGTnHwb624uwvwHujgHy/z8Ghg9kpx8K7MUK6OUOXPbTyj3o5qDbQC93QNI7kn0HKDWRsYESzQC3eq69'}
                            May 19 10:40:29 a1 SMGC: [4287] No work, exiting
                            May 19 10:40:29 a1 SMGC: [4287] GC process exiting, no work left
                            
                            • dthenot (Vates 🪐 XCP-ng Team) @yllar:

                              @yllar Maybe it was indeed because the loop device had not been completely created yet.
                              No errors in this GC run.

                              Everything should be OK then 🙂
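                              If you want to double-check the loop setup while the SR is attached, losetup is what the driver itself uses (the loop device number may differ):

                              losetup --list               # the raw disk should appear as the backing device of the SR's loop device
                              blockdev --getss /dev/loop0  # should report 512 for the emulated device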
