<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[LargeBlockSR for 4KiB blocksize disks]]></title><description><![CDATA[<p dir="auto">Hello,</p>
<p dir="auto">As some of you may know, there is currently a problem where disks with a block size of 4KiB cannot be used as SR disks.<br />
It is an error in the <code>vhd-util</code> utilities that is not easily fixed.<br />
As such, we quickly developed a SMAPI driver that uses <code>losetup</code>'s ability to emulate another sector size, to work around the problem for the moment.</p>
<p dir="auto">The real solution will involve SMAPIv3, for which the first driver is available to test: <a href="https://xcp-ng.org/blog/2024/04/19/first-smapiv3-driver-is-available-in-preview/" target="_blank" rel="noopener noreferrer nofollow ugc">https://xcp-ng.org/blog/2024/04/19/first-smapiv3-driver-is-available-in-preview/</a></p>
<p dir="auto">Back to the LargeBlock driver: it is available in XCP-ng 8.3 in sm 3.0.12-12.2.</p>
<p dir="auto">Setting it up is as simple as creating an EXT SR with the <code>xe</code> CLI, but with <code>type=largeblock</code>:</p>
<pre><code>xe sr-create host-uuid=&lt;host UUID&gt; type=largeblock name-label="LargeBlock SR" device-config:device=/dev/nvme0n1
</code></pre>
<p dir="auto">It does not support using multiple devices because of quirks with LVM and the EXT SR driver.</p>
<p dir="auto">It automatically creates a loop device with a sector size of 512 bytes on top of the 4KiB device, then creates an EXT SR on top of this emulated device.</p>
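<p dir="auto">For illustration, a minimal sketch of the emulation step the driver automates (assumptions: run as root, and <code>/dev/nvme0n1</code> is a hypothetical 4KiB-native disk; the real driver does much more, such as the LVM and filesystem setup visible in SM.log):</p>
<pre><code># Check the device's logical sector size (a 4Kn disk reports 4096)
blockdev --getss /dev/nvme0n1

# Re-export the disk through a loop device with 512-byte logical sectors
LOOP=$(losetup -f -v --show --sector-size 512 /dev/nvme0n1)

# The loop device now reports 512-byte sectors, so vhd-util and the
# EXT SR driver can work with it normally
blockdev --getss "$LOOP"

# Detach when done
losetup -d "$LOOP"
</code></pre>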
<p dir="auto">This driver is a workaround; we have automated tests, but they can't catch everything.<br />
If you have any feedback or problems, don't hesitate to share it here <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=e4fb0e60dbd" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
]]></description><link>https://xcp-ng.org/forum/topic/8901/largeblocksr-for-4kib-blocksize-disks</link><generator>RSS for Node</generator><lastBuildDate>Sat, 14 Mar 2026 05:30:52 GMT</lastBuildDate><atom:link href="https://xcp-ng.org/forum/topic/8901.rss" rel="self" type="application/rss+xml"/><pubDate>Fri, 26 Apr 2024 13:23:08 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Mon, 19 May 2025 09:20:32 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/yllar" aria-label="Profile: yllar">@<bdi>yllar</bdi></a> Maybe it was indeed because the loop device wasn't completely created yet.<br />
No error for this GC run.</p>
<p dir="auto">Everything should be ok then <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=e4fb0e60dbd" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
]]></description><link>https://xcp-ng.org/forum/post/93052</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/93052</guid><dc:creator><![CDATA[dthenot]]></dc:creator><pubDate>Mon, 19 May 2025 09:20:32 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Mon, 19 May 2025 09:16:05 GMT]]></title><description><![CDATA[<p dir="auto">Hi, thank you for the response.</p>
<p dir="auto">Currently, these commands return data immediately:</p>
<pre><code>[10:52 a1 ~]# vgs
  WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
  WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/sda.512 because device is used by LV.
  VG                                                     #PV #LV #SN Attr   VSize   VFree  
  VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679       1   1   0 wz--n- 405.55g 405.55g
  XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b   1   1   0 wz--n-  27.94t      0 
[10:40 a1 ~]# /sbin/vgs --readonly VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679
  WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
  WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/sda.512 because device is used by LV.
  VG                                                 #PV #LV #SN Attr   VSize   VFree  
  VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679   1   1   0 wz--n- 405.55g 405.55g
[11:02 a1 log]# /sbin/vgs --readonly XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b
  WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
  WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/sda.512 because device is used by LV.
  VG                                                     #PV #LV #SN Attr   VSize  VFree
  XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b   1   1   0 wz--n- 27.94t    0 
</code></pre>
<p dir="auto">Maybe this call was somehow in a locked state during the 512 SR creation:</p>
<pre><code>May  2 08:31:40 a1 SM: [18985] ['/sbin/vgs', '--readonly', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
May  2 08:32:24 a1 SM: [18985] ***** Long LVM call of 'vgs' took 43.6255850792
</code></pre>
<p dir="auto">GC seems to be normal on subsequent attempts.</p>
<p dir="auto">SR:</p>
<pre><code>[11:02 a1 log]# xe sr-list uuid=42535e39-4c98-22c6-71eb-303caa3fc97b
uuid ( RO)                : 42535e39-4c98-22c6-71eb-303caa3fc97b
          name-label ( RW): Local Main Nvme storage
    name-description ( RW): 
                host ( RO): A1 - L 11 - 8.2 - 6740E - 30TB - 512GB - 100G - 2U
                type ( RO): largeblock
        content-type ( RO): 
</code></pre>
<p dir="auto">GC:</p>
<pre><code>May 19 10:40:29 a1 SM: [4167] Kicking GC
May 19 10:40:29 a1 SMGC: [4167] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: gc ===
May 19 10:40:29 a1 SMGC: [4286] Will finish as PID [4287]
May 19 10:40:29 a1 SM: [4287] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May 19 10:40:29 a1 SM: [4287] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
May 19 10:40:29 a1 SMGC: [4167] New PID [4286]
May 19 10:40:29 a1 SM: [4176]   pread SUCCESS
May 19 10:40:29 a1 SM: [4176] lock: released /var/lock/sm/.nil/lvm
May 19 10:40:29 a1 SM: [4176] lock: released /var/lock/sm/07ab18c4-a76f-d1fc-4374-babfe21fd679/sr
May 19 10:40:29 a1 SM: [4176] Entering _checkMetadataVolume
May 19 10:40:29 a1 SM: [4176] lock: acquired /var/lock/sm/07ab18c4-a76f-d1fc-4374-babfe21fd679/sr
May 19 10:40:29 a1 SM: [4176] sr_scan {'sr_uuid': '07ab18c4-a76f-d1fc-4374-babfe21fd679', 'subtask_of': 'DummyRef:|37b48251-6927-4828-8c6b-38f9b21bc157|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:970bc095-e2eb-434b-bc9f-1c4e8f58b5c2', 'device_config': {'device': '/dev/disk/by-id/nvme-Dell_BOSS-N1_CN0CMFVPFCP0048500D5-part3', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:c67721e5-74b8-47fe-a180-de50665948c5'}
May 19 10:40:29 a1 SM: [4176] LVHDSR.scan for 07ab18c4-a76f-d1fc-4374-babfe21fd679
May 19 10:40:29 a1 SM: [4176] lock: acquired /var/lock/sm/.nil/lvm
May 19 10:40:29 a1 SM: [4176] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
May 19 10:40:29 a1 SM: [4167] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May 19 10:40:29 a1 SM: [4287] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May 19 10:40:29 a1 SMGC: [4287] Found 0 cache files
May 19 10:40:29 a1 SM: [4287] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active, acquired: True (exists: True)
May 19 10:40:29 a1 SM: [4287] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr, acquired: True (exists: True)
May 19 10:40:29 a1 SM: [4287] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
May 19 10:40:29 a1 SM: [4287]   pread SUCCESS
May 19 10:40:29 a1 SMGC: [4287] SR 4253 ('Local Main Nvme storage') (5 VDIs in 3 VHD trees):
May 19 10:40:29 a1 SMGC: [4287]         255478b4(20.000G/2.603G)
May 19 10:40:29 a1 SMGC: [4287]         *5b3775a4(20.000G/2.603G)
May 19 10:40:29 a1 SMGC: [4287]             fefe7bde(20.000G/881.762M)
May 19 10:40:29 a1 SMGC: [4287]             761e5fa7(20.000G/45.500K)
May 19 10:40:29 a1 SMGC: [4287]         a3f4b8e5(20.000G/2.675G)
May 19 10:40:29 a1 SMGC: [4287]
May 19 10:40:29 a1 SM: [4287] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May 19 10:40:29 a1 SM: [4176]   pread SUCCESS
May 19 10:40:29 a1 SM: [4176] lock: released /var/lock/sm/.nil/lvm
May 19 10:40:29 a1 SMGC: [4287] Got sm-config for *5b3775a4(20.000G/2.603G): {'vhd-blocks': 'eJz7///3DgYgaGBABZTyiQX/oYBc/bjsJ9Y8mNXUcgepAGTnHwb624uwvwHujgHy/z8Ghg9kpx8K7MUK6OUOXPbTyj3o5qDbQC93QNI7kn0HKDWRsYESzQC3eq69'}
May 19 10:40:29 a1 SMGC: [4287] No work, exiting
May 19 10:40:29 a1 SMGC: [4287] GC process exiting, no work left
</code></pre>
]]></description><link>https://xcp-ng.org/forum/post/93050</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/93050</guid><dc:creator><![CDATA[yllar]]></dc:creator><pubDate>Mon, 19 May 2025 09:16:05 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Mon, 19 May 2025 08:47:52 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/yllar" aria-label="Profile: yllar">@<bdi>yllar</bdi></a></p>
<p dir="auto">Sorry, I missed the first ping.</p>
<pre><code>May  2 08:31:40 a1 SM: [18985] ['/sbin/vgs', '--readonly', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
May  2 08:32:24 a1 SM: [18985] ***** Long LVM call of 'vgs' took 43.6255850792
</code></pre>
<p dir="auto">That would explain why it took a long time to create: 43 seconds for a single call to <code>vgs</code>.<br />
Can you try running a <code>vgs</code> call yourself on your host?<br />
Does it take a long time?</p>
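<p dir="auto">For example, something like this (VG name taken from your log; adjust for your host):</p>
<pre><code># Time a read-only vgs call; anything in the tens of seconds suggests LVM
# is struggling during device scanning (e.g. seeing the same PV on both
# /dev/sda and the /dev/sda.512 loop device)
time /sbin/vgs --readonly VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679
</code></pre>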
<p dir="auto">This exception is "normal":</p>
<pre><code>May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
May  2 08:32:25 a1 SMGC: [19336]          ***********************
May  2 08:32:25 a1 SMGC: [19336]          *  E X C E P T I O N  *
May  2 08:32:25 a1 SMGC: [19336]          ***********************
May  2 08:32:25 a1 SMGC: [19336] gc: EXCEPTION &lt;class 'util.SMException'&gt;, SR 42535e39-4c98-22c6-71eb-303caa3fc97b not attached on this host
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3388, in gc
May  2 08:32:25 a1 SMGC: [19336]     _gc(None, srUuid, dryRun)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3267, in _gc
May  2 08:32:25 a1 SMGC: [19336]     sr = SR.getInstance(srUuid, session)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1552, in getInstance
May  2 08:32:25 a1 SMGC: [19336]     return FileSR(uuid, xapi, createLock, force)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 2334, in __init__
May  2 08:32:25 a1 SMGC: [19336]     SR.__init__(self, uuid, xapi, createLock, force)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1582, in __init__
May  2 08:32:25 a1 SMGC: [19336]     raise util.SMException("SR %s not attached on this host" % uuid)
May  2 08:32:25 a1 SMGC: [19336]
May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
May  2 08:32:25 a1 SMGC: [19336] * * * * * SR 42535e39-4c98-22c6-71eb-303caa3fc97b: ERROR
May  2 08:32:25 a1 SMGC: [19336]
</code></pre>
<p dir="auto">It's the garbage collector trying to run on the SR while it is still in the process of attaching.<br />
It's weird, though, because it's the call to <code>sr_attach</code> that launched the GC.<br />
Does the GC run normally on this SR on subsequent attempts?</p>
<p dir="auto">Otherwise, I don't see anything worrying in the logs you shared.<br />
It should be safe to use <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=e4fb0e60dbd" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
]]></description><link>https://xcp-ng.org/forum/post/93048</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/93048</guid><dc:creator><![CDATA[dthenot]]></dc:creator><pubDate>Mon, 19 May 2025 08:47:52 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Fri, 16 May 2025 10:16:48 GMT]]></title><description><![CDATA[<p dir="auto">Reping <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/dthenot" aria-label="Profile: dthenot">@<bdi>dthenot</bdi></a></p>
]]></description><link>https://xcp-ng.org/forum/post/92986</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/92986</guid><dc:creator><![CDATA[yllar]]></dc:creator><pubDate>Fri, 16 May 2025 10:16:48 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Fri, 02 May 2025 12:24:41 GMT]]></title><description><![CDATA[<p dir="auto">Reping <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/dthenot" aria-label="Profile: dthenot">@<bdi>dthenot</bdi></a></p>
]]></description><link>https://xcp-ng.org/forum/post/92562</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/92562</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Fri, 02 May 2025 12:24:41 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Fri, 02 May 2025 06:57:58 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/dthenot" aria-label="Profile: dthenot">@<bdi>dthenot</bdi></a></p>
<p dir="auto">NVMe drives attached to the PERC H965i are identified as SCSI disks in the operating system.<br />
The OS exposes a 4Kn NVMe device at <em>/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05</em>.</p>
<p dir="auto">Below is a log of creating a type=largeblock SR on the latest XCP-ng 8.2.</p>
<p dir="auto">It takes about 10 minutes and produces some errors, but it does successfully create the SR and we are able to use it.<br />
Are all these errors expected, and can we trust that it's working normally?</p>
<p dir="auto">Console:</p>
<pre><code># xe sr-create host-uuid=383399d1-b304-48db-ad4b-bc8fe8b56f89 type=largeblock name-label="Local Main Name storage" device-config:device=/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05
42535e39-4c98-22c6-71eb-303caa3fc97b
</code></pre>
<p dir="auto">SM.log:</p>
<pre><code>May  2 08:22:29 a1 SM: [15928] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:22:29 a1 SM: [15928] sr_create {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|2d81471d-e02c-4c9c-8273-51527e849c1d|SR.create', 'args': ['0'], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:61307cea-d759-4f9a-9052-e44c0c574b1f', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_create', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
May  2 08:22:29 a1 SM: [15928] ['blockdev', '--getss', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] ['losetup', '-f', '-v', '--show', '--sector-size', '512', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] util.test_scsiserial: Not a serial device: /dev/loop0
May  2 08:22:29 a1 SM: [15928] lock: opening lock file /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] ['/sbin/vgs', '--readonly', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:22:29 a1 SM: [15928] FAILED in util.pread: (rc 5) stdout: '', stderr: '  WARNING: Not using device /dev/sda for PV HMeziz-gDTa-cNLl-1B2E-ebbh-e5ki-RIhK5T.
May  2 08:22:29 a1 SM: [15928]   WARNING: PV HMeziz-gDTa-cNLl-1B2E-ebbh-e5ki-RIhK5T prefers device /dev/loop0 because device was seen first.
May  2 08:22:29 a1 SM: [15928]   Volume group "XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b" not found
May  2 08:22:29 a1 SM: [15928]   Cannot process volume group XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b
May  2 08:22:29 a1 SM: [15928] '
May  2 08:22:29 a1 SM: [15928] ['/bin/dd', 'if=/dev/zero', 'of=/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05.512', 'bs=1M', 'count=10', 'oflag=direct']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] ['/sbin/vgcreate', '--metadatasize', '10M', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05.512']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] ['/sbin/vgchange', '-an', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] ['/sbin/lvdisplay', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:22:29 a1 SM: [15928] FAILED in util.pread: (rc 5) stdout: '', stderr: '  WARNING: Not using device /dev/sda for PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94.
May  2 08:22:29 a1 SM: [15928]   WARNING: PV tJUoM9-WUfs-XYZy-vXje-p040-30ui-dder94 prefers device /dev/loop0 because device was seen first.
May  2 08:22:29 a1 SM: [15928]   Failed to find logical volume "XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b"
May  2 08:22:29 a1 SM: [15928] '
May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] lock: acquired /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] lock: released /var/lock/sm/.nil/lvm
May  2 08:22:29 a1 SM: [15928] ['lvcreate', '-n', '42535e39-4c98-22c6-71eb-303caa3fc97b', '-L', '29302004', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] ['lvchange', '-ay', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:22:29 a1 SM: [15928]   pread SUCCESS
May  2 08:22:29 a1 SM: [15928] ['mkfs.ext4', '-F', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']


May  2 08:31:40 a1 SM: [18985] lock: opening lock file /var/lock/sm/07ab18c4-a76f-d1fc-4374-babfe21fd679/sr
May  2 08:31:40 a1 SM: [18985] LVMCache created for VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679
May  2 08:31:40 a1 SM: [18985] lock: opening lock file /var/lock/sm/.nil/lvm
May  2 08:31:40 a1 SM: [18985] ['/sbin/vgs', '--readonly', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
May  2 08:32:24 a1 SM: [15928]   pread SUCCESS
May  2 08:32:24 a1 SM: [15928] ['/usr/lib/udev/scsi_id', '-g', '--device', '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05.512']
May  2 08:32:24 a1 SM: [18985] ***** Long LVM call of 'vgs' took 43.6255850792
May  2 08:32:24 a1 SM: [18985] Entering _checkMetadataVolume
May  2 08:32:24 a1 SM: [18985] LVMCache: will initialize now
May  2 08:32:24 a1 SM: [18985] LVMCache: refreshing
May  2 08:32:24 a1 SM: [18985] lock: acquired /var/lock/sm/.nil/lvm
May  2 08:32:24 a1 SM: [18985] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
May  2 08:32:24 a1 SM: [15928] FAILED in util.pread: (rc 1) stdout: '', stderr: ''
May  2 08:32:24 a1 SM: [15928] ['losetup', '--list']
May  2 08:32:24 a1 SM: [15928]   pread SUCCESS
May  2 08:32:24 a1 SM: [15928] ['losetup', '-d', '/dev/loop0']
May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
May  2 08:32:24 a1 SM: [18985] lock: released /var/lock/sm/.nil/lvm
May  2 08:32:24 a1 SM: [15928]   pread SUCCESS
May  2 08:32:24 a1 SM: [15928] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:24 a1 SM: [19294] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:24 a1 SM: [19294] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:24 a1 SM: [19294] sr_attach {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|ab5bca7f-6597-4874-948a-b4c8a0b4283e|SR.attach', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:5abf6e38-a9b0-44ff-a095-e95786bb30f7', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_attach', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
May  2 08:32:24 a1 SMGC: [19294] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: abort ===
May  2 08:32:24 a1 SM: [19294] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May  2 08:32:24 a1 SM: [19294] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
May  2 08:32:24 a1 SM: [19294] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active, acquired: True (exists: True)
May  2 08:32:24 a1 SMGC: [19294] abort: releasing the process lock
May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
May  2 08:32:24 a1 SM: [19294] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May  2 08:32:24 a1 SM: [19294] RESET for SR 42535e39-4c98-22c6-71eb-303caa3fc97b (master: True)
May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May  2 08:32:24 a1 SM: [19294] set_dirty 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013' succeeded
May  2 08:32:24 a1 SM: [19294] ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['losetup', '--list']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['losetup', '-f', '-v', '--show', '--sector-size', '512', '/dev/sda']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['lvchange', '-ay', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['fsck', '-a', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['mount', '/dev/XSLocalLargeBlock-42535e39-4c98-22c6-71eb-303caa3fc97b/42535e39-4c98-22c6-71eb-303caa3fc97b', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['/usr/lib/udev/scsi_id', '-g', '--device', '/dev/sda.512']
May  2 08:32:24 a1 SM: [19294] FAILED in util.pread: (rc 1) stdout: '', stderr: ''
May  2 08:32:24 a1 SM: [19294] Dom0 disks: ['/dev/nvme0n1p']
May  2 08:32:24 a1 SM: [19294] Block scheduler: /dev/sda.512 (/dev/loop) wants noop
May  2 08:32:24 a1 SM: [19294] no path /sys/block/loop/queue/scheduler
May  2 08:32:24 a1 SM: [19294] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] ['ls', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b', '-1', '--color=never']
May  2 08:32:24 a1 SM: [19294]   pread SUCCESS
May  2 08:32:24 a1 SM: [19294] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running, acquired: True (exists: True)
May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May  2 08:32:24 a1 SM: [19294] Kicking GC
May  2 08:32:24 a1 SMGC: [19294] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: gc ===
May  2 08:32:24 a1 SMGC: [19335] Will finish as PID [19336]
May  2 08:32:24 a1 SMGC: [19294] New PID [19335]
May  2 08:32:24 a1 SM: [19294] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SM: [19336] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
May  2 08:32:25 a1 SMGC: [19336]          ***********************
May  2 08:32:25 a1 SMGC: [19336]          *  E X C E P T I O N  *
May  2 08:32:25 a1 SMGC: [19336]          ***********************
May  2 08:32:25 a1 SMGC: [19336] gc: EXCEPTION &lt;class 'util.SMException'&gt;, SR 42535e39-4c98-22c6-71eb-303caa3fc97b not attached on this host
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3388, in gc
May  2 08:32:25 a1 SMGC: [19336]     _gc(None, srUuid, dryRun)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3267, in _gc
May  2 08:32:25 a1 SMGC: [19336]     sr = SR.getInstance(srUuid, session)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1552, in getInstance
May  2 08:32:25 a1 SMGC: [19336]     return FileSR(uuid, xapi, createLock, force)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 2334, in __init__
May  2 08:32:25 a1 SMGC: [19336]     SR.__init__(self, uuid, xapi, createLock, force)
May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1582, in __init__
May  2 08:32:25 a1 SMGC: [19336]     raise util.SMException("SR %s not attached on this host" % uuid)
May  2 08:32:25 a1 SMGC: [19336]
May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
May  2 08:32:25 a1 SMGC: [19336] * * * * * SR 42535e39-4c98-22c6-71eb-303caa3fc97b: ERROR
May  2 08:32:25 a1 SMGC: [19336]
May  2 08:32:25 a1 SM: [19367] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SM: [19367] sr_update {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|f960ef27-5d11-461d-9d4f-072e24be96b0|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:220b9035-d899-4882-9627-bd6d4adb9e9c', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
May  2 08:32:25 a1 SM: [19387] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SM: [19387] lock: acquired /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SM: [19387] sr_scan {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|9e3a3942-fb33-46d2-bb01-5463e16ff9a1|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:7e309bfe-889c-464e-b6ed-949d9e4adfb5', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
May  2 08:32:25 a1 SM: [19387] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
May  2 08:32:25 a1 SM: [19387]   pread SUCCESS
May  2 08:32:25 a1 SM: [19387] ['ls', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b', '-1', '--color=never']
May  2 08:32:25 a1 SM: [19387]   pread SUCCESS
May  2 08:32:25 a1 SM: [19387] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May  2 08:32:25 a1 SM: [19387] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running, acquired: True (exists: True)
May  2 08:32:25 a1 SM: [19387] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May  2 08:32:25 a1 SM: [19387] Kicking GC
May  2 08:32:25 a1 SMGC: [19387] === SR 42535e39-4c98-22c6-71eb-303caa3fc97b: gc ===
May  2 08:32:25 a1 SMGC: [19398] Will finish as PID [19399]
May  2 08:32:25 a1 SM: [19399] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/running
May  2 08:32:25 a1 SM: [19399] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
May  2 08:32:25 a1 SMGC: [19387] New PID [19398]
May  2 08:32:25 a1 SM: [19387] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SM: [19399] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SMGC: [19399] Found 0 cache files
May  2 08:32:25 a1 SM: [19399] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active, acquired: True (exists: True)
May  2 08:32:25 a1 SM: [19399] lock: tried lock /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr, acquired: True (exists: True)
May  2 08:32:25 a1 SM: [19399] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/42535e39-4c98-22c6-71eb-303caa3fc97b/*.vhd']
May  2 08:32:25 a1 SM: [19399]   pread SUCCESS
May  2 08:32:25 a1 SMGC: [19399] SR 4253 ('Local Main Name storage') (0 VDIs in 0 VHD trees): no changes
May  2 08:32:25 a1 SM: [19399] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SMGC: [19399] No work, exiting
May  2 08:32:25 a1 SMGC: [19399] GC process exiting, no work left
May  2 08:32:25 a1 SM: [19399] lock: released /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/gc_active
May  2 08:32:25 a1 SMGC: [19399] In cleanup
May  2 08:32:25 a1 SMGC: [19399] SR 4253 ('Local Main Name storage') (0 VDIs in 0 VHD trees): no changes
May  2 08:32:25 a1 SM: [19432] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SM: [19432] sr_update {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|3207059a-03f4-42a3-bd23-23d226386b08|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:1a441cae-d761-45b0-a025-c2ca371f0639', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
May  2 08:32:25 a1 SM: [19449] lock: opening lock file /var/lock/sm/42535e39-4c98-22c6-71eb-303caa3fc97b/sr
May  2 08:32:25 a1 SM: [19449] sr_update {'sr_uuid': '42535e39-4c98-22c6-71eb-303caa3fc97b', 'subtask_of': 'DummyRef:|6486f365-f80b-427e-afc5-e5c8cc1b4931|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:a26b109a-5fb2-4644-8e7e-3cc251e43d5c', 'session_ref': 'OpaqueRef:1501190d-78ae-4d34-9b7f-bc6fb1103494', 'device_config': {'device': '/dev/disk/by-id/scsi-36f4ee0807844d90068121b9321711e05', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:339f5ed6-e8e6-4421-932f-d64f7ef60013'}
</code></pre>
]]></description><link>https://xcp-ng.org/forum/post/92555</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/92555</guid><dc:creator><![CDATA[yllar]]></dc:creator><pubDate>Fri, 02 May 2025 06:57:58 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Sat, 26 Apr 2025 08:38:37 GMT]]></title><description><![CDATA[<p dir="auto">I don't see any reason it couldn't, but I prefer <a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/dthenot" aria-label="Profile: dthenot">@<bdi>dthenot</bdi></a> to answer <img src="https://xcp-ng.org/forum/assets/plugins/nodebb-plugin-emoji/emoji/android/1f642.png?v=e4fb0e60dbd" class="not-responsive emoji emoji-android emoji--slightly_smiling_face" style="height:23px;width:auto;vertical-align:middle" title=":)" alt="🙂" /></p>
]]></description><link>https://xcp-ng.org/forum/post/92342</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/92342</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Sat, 26 Apr 2025 08:38:37 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Fri, 25 Apr 2025 20:18:32 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/dthenot" aria-label="Profile: dthenot">@<bdi>dthenot</bdi></a> Are multiple SRs with <code>type=largeblock</code> supported on the same host?</p>
]]></description><link>https://xcp-ng.org/forum/post/92328</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/92328</guid><dc:creator><![CDATA[yllar]]></dc:creator><pubDate>Fri, 25 Apr 2025 20:18:32 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Wed, 09 Apr 2025 13:53:47 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/stormi" aria-label="Profile: stormi">@<bdi>stormi</bdi></a></p>
<p dir="auto">Thanks, I was wondering if that was the case from reading the other threads.</p>
]]></description><link>https://xcp-ng.org/forum/post/91792</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/91792</guid><dc:creator><![CDATA[Greg_E]]></dc:creator><pubDate>Wed, 09 Apr 2025 13:53:47 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Tue, 08 Apr 2025 14:29:47 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/greg_e" aria-label="Profile: Greg_E">@<bdi>Greg_E</bdi></a> It is a separate SR type that you can select when you create an SR in Xen Orchestra.</p>
<p dir="auto">It's also for local SRs only. For NFS you still use the NFS storage driver.</p>
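<p dir="auto">For reference, a minimal CLI sketch of creating such a local SR (the host UUID and device path below are placeholders, taken from the first post of this thread):</p>
<pre><code>xe sr-create host-uuid=&lt;host UUID&gt; type=largeblock name-label="LargeBlock SR" device-config:device=/dev/nvme0n1
</code></pre>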
]]></description><link>https://xcp-ng.org/forum/post/91747</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/91747</guid><dc:creator><![CDATA[stormi]]></dc:creator><pubDate>Tue, 08 Apr 2025 14:29:47 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Tue, 08 Apr 2025 14:24:58 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/stormi" aria-label="Profile: stormi">@<bdi>stormi</bdi></a></p>
<p dir="auto">Is there a way to enable this on an SR without using the <code>xe</code> CLI? Can we specify this as an option during SR creation from XO? Or did this become the normal way to create an SR? I have both 8.2.x and 8.3, with TrueNAS SCALE and NFS shares on both.</p>
]]></description><link>https://xcp-ng.org/forum/post/91746</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/91746</guid><dc:creator><![CDATA[Greg_E]]></dc:creator><pubDate>Tue, 08 Apr 2025 14:24:58 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Mon, 01 Jul 2024 19:59:52 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/dthenot" aria-label="Profile: dthenot">@<bdi>dthenot</bdi></a> said in <a href="/forum/post/78413">LargeBlockSR for 4KiB blocksize disks</a>:</p>
<blockquote>
<p dir="auto">Hello again,</p>
<p dir="auto">It is now available in <strong>8.2.1</strong> via the testing packages; you can install them by enabling the testing repository and updating.<br />
The driver ships in sm 2.30.8-10.2.</p>
<pre><code>yum update --enablerepo=xcp-ng-testing sm xapi-core xapi-xe xapi-doc
</code></pre>
<p dir="auto">You then need to restart the toolstack.<br />
Afterwards, you can create an SR with the command in the post above.</p>
</blockquote>
<p dir="auto">Update: the driver is now available on any up-to-date XCP-ng 8.2.1 or 8.3. There is no need to update from the testing repositories (you might get something unexpected).</p>
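<p dir="auto">One quick way to check which <code>sm</code> version a host actually has (assuming a standard XCP-ng install, where <code>sm</code> is an RPM package) is to query the RPM database on the host:</p>
<pre><code>rpm -q sm
</code></pre>
<p dir="auto">The versions mentioned in this thread are sm 2.30.8-10.2 for 8.2.1 and sm 3.0.12-12.2 for 8.3.</p>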
]]></description><link>https://xcp-ng.org/forum/post/79545</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/79545</guid><dc:creator><![CDATA[stormi]]></dc:creator><pubDate>Mon, 01 Jul 2024 19:59:52 GMT</pubDate></item><item><title><![CDATA[Reply to LargeBlockSR for 4KiB blocksize disks on Fri, 07 Jun 2024 14:44:53 GMT]]></title><description><![CDATA[<p dir="auto">Hello again,</p>
<p dir="auto">It is now available in <strong>8.2.1</strong> via the testing packages; you can install them by enabling the testing repository and updating.<br />
The driver ships in sm 2.30.8-10.2.</p>
<pre><code>yum update --enablerepo=xcp-ng-testing sm xapi-core xapi-xe xapi-doc
</code></pre>
<p dir="auto">You then need to restart the toolstack.<br />
Afterwards, you can create an SR with the command in the post above.</p>
]]></description><link>https://xcp-ng.org/forum/post/78413</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/78413</guid><dc:creator><![CDATA[dthenot]]></dc:creator><pubDate>Fri, 07 Jun 2024 14:44:53 GMT</pubDate></item></channel></rss>