Hello,
I'm a new developer on XCP-ng; I'll be working on the Xen side to improve performance.
I'm a recent graduate of the University of Versailles Saint-Quentin, specializing in parallel computing and HPC, and I have a strong interest in operating systems.
Hello,
As some of you may know, there is currently a problem where disks with a 4 KiB block size cannot be used as SR disks.
It is an error in the vhd-util utilities that is not easily fixed.
As such, we quickly developed an SMAPI driver that uses losetup's ability to emulate another sector size, as a workaround for the moment.
The real solution will involve SMAPIv3, for which the first driver is available to test: https://xcp-ng.org/blog/2024/04/19/first-smapiv3-driver-is-available-in-preview/
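If you want to check whether one of your disks is affected, blockdev can report its logical and physical sector sizes (the device name below is just an example):
blockdev --getss --getpbsz /dev/nvme0n1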
To go back to the LargeBlock driver, it is available in 8.3 in sm 3.0.12-12.2.
Setting it up is as simple as creating an EXT SR with the xe CLI, but with type=largeblock:
xe sr-create host-uuid=<host UUID> type=largeblock name-label="LargeBlock SR" device-config:device=/dev/nvme0n1
It does not support using multiple devices because of quirks with LVM and the EXT SR driver.
It automatically creates a loop device with a 512 B sector size on top of the 4 KiB device and then creates an EXT SR on top of this emulated device.
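To give an idea of what that means, here is a simplified sketch of the kind of losetup call involved; this is not the driver's exact code, and the device name is a placeholder:
losetup --sector-size 512 --find --show /dev/nvme0n1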
This driver is a workaround; we have automated tests, but they can't catch everything.
If you have any feedback or problems, don't hesitate to share them here.
@ph7 It's only enabled for the two yum commands where --enablerepo is explicitly used.
It's disabled in the config otherwise.
No need to do anything 
@gb.123 Hello,
The instructions in the first post are still the way to go.
@Andrew Hello,
I have been able to find the problem and make a fix, it's in the process of being packaged.
I can confirm it only happens for file-based SRs when using snapshot purging.
For some reason, the VDI type for CBT_metadata is cbtlog for FileSR but stays the image format it was for LVMSR,
which makes a condition fail during the list_changed_blocks call.
@olivierlambert In 8.2, yes, the LINSTOR sm version is separate; that's no longer the case in 8.3.
@ccooke Hello,
We have a fix and are aiming to validate it quickly, so it shouldn't happen to others.
Thank you for reporting the issue. I'll update the thread again when the update is available; at that point it should be safe for other people going through here to update using the RPU.
For people testing the QCOW2 preview, please note that you need to update with the QCOW2 repo enabled. If you install the new non-QCOW2 version, you risk QCOW2 VDIs being dropped from the XAPI database until you have installed it and re-scanned the SR.
Being dropped from XAPI means losing the name-label, the description and, worse, the links to a VM for these VDIs.
There should be a blktap, sm and sm-fairlock update of the same version as above in the QCOW2 repo.
If you have correctly added the QCOW2 repo linked here: https://xcp-ng.org/forum/post/90287
You can update like this:
yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-qcow2
yum update --enablerepo=xcp-ng-testing,xcp-ng-qcow2
reboot
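After the reboot, you can confirm that the installed packages match the QCOW2 builds listed just below with a standard rpm query:
rpm -q blktap sm sm-fairlock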
blktap: 3.55.4-1.1.0.qcow2.1.xcpng8.3
sm: 3.2.12-3.1.0.qcow2.1.xcpng8.3
Hello again,
The updates have been made available, and an RPU with XOSTOR should be safe to run.
https://xcp-ng.org/blog/2026/05/07/may-2026-updates-2-for-xcp-ng-8-3-lts/
@ccooke Hello,
You should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts.
yum update sm sm-fairlock
Then you should be able to re-plug the SR on the master and proceed with the RPU.
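If the PBD doesn't re-plug on its own, it can be done manually with the xe CLI (the UUIDs are placeholders):
xe pbd-list sr-uuid=<SR UUID>
xe pbd-plug uuid=<PBD UUID>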
@IgorGlock Hello,
Could you share the exception that should be in /var/log/SMlog?
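If it helps, something like the following usually surfaces the traceback (the pattern is only a suggestion):
grep -B 2 -A 30 'Exception' /var/log/SMlog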
@Andrew Hello Andrew,
Thank you for reporting.
It appears that CBT on FileSR-based SRs is not working in combination with data-destroy (the option that allows removing the VDI content while keeping only the CBT metadata).
Can you confirm that you are using a FileSR (ext or nfs)?
Is it possible to disable data purging on the CR job?
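For the first question, the SR type can be checked from the CLI (the UUID is a placeholder):
xe sr-list uuid=<SR UUID> params=type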
@acebmxer Hello,
The VDI_CBT_ENABLED error means that XAPI refuses to move the VDI so as not to break the CBT chain.
You can disable CBT on the VDI before migrating it, but if you have snapshots with CBT enabled it can get complicated and might require removing them before moving the VDI.
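For reference, disabling CBT on a single VDI can be done with the xe CLI (the UUID is a placeholder):
xe vdi-disable-cbt uuid=<VDI UUID>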
We have changes planned to improve the CBT handling in this kind of case.
@ovicz Hello,
From what I saw in your logs, you have a non-QCOW2 sm version; it made the QCOW2 VDIs unavailable to the storage stack and XAPI lost them.
If you update again while enabling the QCOW2 repo:
yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2
An SR scan will then make the VDIs available to XAPI again, though you will have to identify them and attach them to the VMs manually, since this information was lost.
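For reference, the scan can be triggered from the CLI (the UUID is a placeholder):
xe sr-scan uuid=<SR UUID>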
@nuentes Hello,
Following an AI already seems dangerous enough, no need for Skynet.
There is a documentation part about regenerating the initrd: https://docs.xcp-ng.org/troubleshooting/common-problems/#initrd-is-missing-after-an-update
You can likely use what you did above to mount the XCP-ng FS and then regenerate the initrd using the command from that page.
It's not an initramfs that you need to generate but an initrd.
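As a rough sketch only, assuming the XCP-ng root FS is mounted under /mnt and that dracut is the tool used (check the linked page for the exact invocation; the kernel version is a placeholder):
chroot /mnt
dracut -f /boot/initrd-<kernel version>.img <kernel version>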