XCP-ng

    dthenot

    @dthenot

    Vates 🪐 XCP-ng Team

    Groups: Storage Team, Vates 🪐 XCP-ng Team, Admin

    Best posts made by dthenot

    • RE: New XCP-ng developers

      Hello,

      I'm a new developer on XCP-ng, and I'll be working on the Xen side to improve performance.
      I'm a recent graduate of the University of Versailles Saint-Quentin, where I specialized in parallel computing and HPC, and I have a strong interest in operating systems.

      posted in News
      dthenot
    • LargeBlockSR for 4KiB blocksize disks

      Hello,

      As some of you may know, there is currently a problem with disks with a 4 KiB block size: they cannot be used as SR disks.
      The error comes from the vhd-util utilities and is not easily fixed.
      As a stopgap, we quickly developed a SMAPI driver that uses losetup's ability to emulate a different sector size to work around the problem for the moment.

      The real solution will involve SMAPIv3, for which the first driver is available to test: https://xcp-ng.org/blog/2024/04/19/first-smapiv3-driver-is-available-in-preview/

      To get back to the LargeBlock driver: it is available in 8.3 starting with sm 3.0.12-12.2.

      Setting it up is as simple as creating an EXT SR with the xe CLI, but with type=largeblock:

      xe sr-create host-uuid=<host UUID> type=largeblock name-label="LargeBlock SR" device-config:device=/dev/nvme0n1
      

      It does not support using multiple devices because of quirks in LVM and the EXT SR driver.

      It automatically creates a loop device with a 512 B sector size on top of the 4 KiB device, then creates an EXT SR on top of this emulated device.
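
What the driver does under the hood can be sketched with plain losetup; the backing file below stands in for the 4 KiB disk and is purely illustrative, not part of the actual driver:

```shell
# Backing file standing in for the 4 KiB-sector disk (illustrative only).
truncate -s 64M /tmp/largeblock-demo.img

# Attach it as a loop device exposing 512-byte logical sectors,
# as the LargeBlock driver does on top of the real device.
LOOPDEV=$(losetup --find --show --sector-size 512 /tmp/largeblock-demo.img)

# The EXT SR would then be created on this emulated device; here we
# just confirm the logical sector size seen by the block layer.
blockdev --getss "$LOOPDEV"   # prints 512

# Clean up.
losetup -d "$LOOPDEV"
rm -f /tmp/largeblock-demo.img
```

Note that losetup's --sector-size option needs a reasonably recent util-linux, and attaching loop devices requires root.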

      This driver is a workaround; we have automated tests, but they can't catch everything.
      If you run into any problems or have feedback, don't hesitate to share it here 🙂

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @ph7 It's only enabled for the two yum commands where --enablerepo is explicitly used.
      It's disabled in the config otherwise.
      No need to do anything 🙂

      posted in News
    • RE: XOSTOR hyperconvergence preview

      @gb.123 Hello,
      The instructions in the first post are still the way to go 🙂

      posted in XOSTOR
    • RE: XCP-ng 8.3 updates announcements and testing

      @Andrew Hello,

      I have been able to find the problem and make a fix; it's in the process of being packaged.
      I can confirm it only happens for file-based SRs when snapshot purging is used.
      For some reason, the VDI type of the CBT_metadata VDI is cbtlog for FileSR but stays the image format it was for LVMSR, which makes a condition fail during the list_changed_blocks call.

      posted in News
    • RE: Possible to reconnect SR automatically?

      @manilx Hi,

      yum install plug-late-sr
      

      Should do the trick to install it 🙂

      posted in Development
    • RE: Unable to add new node to pool using XOSTOR

      @olivierlambert In 8.2, yes, the LINSTOR sm version is a separate package; that's no longer the case in 8.3.

      posted in XOSTOR
    • RE: XOSTOR appears to be broken on the new XCP-NG May 2026 update

      @ccooke Hello,

      We have a fix and are aiming to validate it quickly, so this shouldn't happen to others.
      Thank you for reporting the issue. I'll update the thread again when the update is available; it should then be safe for other people coming through here to update using the RPU.

      posted in XOSTOR
    • RE: CBT: the thread to centralize your feedback

      @olivierlambert I am 🙂

      posted in Backup
    • RE: XCP-ng 8.3 updates announcements and testing

      For people testing the QCOW2 preview: please be aware that you need to update with the QCOW2 repo enabled. If you install the new non-QCOW2 version, you risk QCOW2 VDIs being dropped from the XAPI database until you have installed it and re-scanned the SR.
      Being dropped from XAPI means losing the name-label, the description and, worse, the links to a VM for these VDIs.
      There should be blktap, sm and sm-fairlock updates of the same version as above in the QCOW2 repo.

      If you have correctly added the QCOW2 repo linked here: https://xcp-ng.org/forum/post/90287, you can update like this:

      yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-qcow2
      yum update --enablerepo=xcp-ng-testing,xcp-ng-qcow2
      reboot
      

      Versions:

      • blktap: 3.55.4-1.1.0.qcow2.1.xcpng8.3
      • sm: 3.2.12-3.1.0.qcow2.1.xcpng8.3
      posted in News

    Latest posts made by dthenot

    • RE: XOSTOR appears to be broken on the new XCP-NG May 2026 update

      Hello again,

      The updates have been made available, and an RPU with XOSTOR should now be safe to run 🙂
      https://xcp-ng.org/blog/2026/05/07/may-2026-updates-2-for-xcp-ng-8-3-lts/

      posted in XOSTOR
    • RE: XOSTOR appears to be broken on the new XCP-NG May 2026 update

      @ccooke Hello,

      You should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts.

      yum update sm sm-fairlock
      

      Then you should be able to re-plug the SR on the master and proceed with the RPU.

      posted in XOSTOR
    • RE: XCP-ng 8.3 updates announcements and testing

      @IgorGlock Hello,

      Could you share the exception that should be in /var/log/SMlog?

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @Andrew Hello Andrew,

      Thank you for reporting.
      It appears that CBT on FileSR-based SRs is not working when combined with data-destroy (the option that allows removing the VDI content while keeping only the CBT metadata).
      Can you confirm that you are using a FileSR (ext or nfs)?
      Is it possible to disable the purge-data option on the CR job?

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @acebmxer Hello,

      The VDI_CBT_ENABLED error means that XAPI refuses to move the VDI so as not to break the CBT chain.
      You can disable CBT on the VDI before migrating it, but if you have snapshots with CBT enabled it can get complicated, and you may need to remove them before moving the VDI.
      We have changes planned to improve CBT handling in this kind of case.
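
As a sketch, the check-then-disable steps with the xe CLI look like this (the VDI UUID is a placeholder for your own):

```shell
# Check whether CBT is currently enabled on the VDI (placeholder UUID).
xe vdi-param-get uuid=<VDI UUID> param-name=cbt-enabled

# Disable CBT on the VDI so the migration is no longer blocked.
xe vdi-disable-cbt uuid=<VDI UUID>
```

After the migration, CBT can be re-enabled with xe vdi-enable-cbt, with the caveat that the chain is broken and the next delta backup will typically be taken as a full one.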

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @rzr Host updated 🙂

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @ovicz Hello,

      From what I saw in your logs, you have a non-QCOW2 sm version; it made the QCOW2 VDIs unavailable to the storage stack, and XAPI lost track of them.
      If you update again with the QCOW2 repo enabled:

      yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2
      

      An SR scan will then make the VDIs available to XAPI again. You will, however, have to identify them and reconnect them to their VMs manually, since this information was lost.

      posted in News
    • RE: Booting to Dracut (I trusted ChatGPT)

      @nuentes Hello,

      Following an AI already seems dangerous enough, no need for Skynet 😆

      There is a documentation part about regenerating the initrd: https://docs.xcp-ng.org/troubleshooting/common-problems/#initrd-is-missing-after-an-update

      You can likely use what you did above to mount the XCP-ng filesystem and then regenerate the initrd using the command from that documentation.
      It's not an initramfs that you need to generate, but an initrd 🙂
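
For reference, the regeneration step boils down to a dracut invocation along these lines, run from a chroot into the mounted XCP-ng root filesystem; the way the kernel version is derived here is an assumption, so treat this as a sketch and follow the linked documentation for the authoritative command:

```shell
# Inside the chroot of the mounted XCP-ng root filesystem:
# derive the installed kernel version from /boot (illustrative).
KVER=$(ls /boot | sed -n 's/^vmlinuz-//p' | head -n 1)

# Rebuild the initrd for that kernel.
dracut --force /boot/initrd-${KVER}.img "${KVER}"
```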

      posted in XCP-ng