XCP-ng
    dthenot
    • Topics 1
    • Posts 54
    • Groups 4

    Posts

    • RE: XCP-ng 8.3 updates announcements and testing

      @Andrew Hello,

      I have been able to find the problem and make a fix; it's in the process of being packaged.
      I can confirm it only happens for file-based SRs when using purge snapshots.
      For some reason, the VDI type of the CBT metadata is cbtlog for FileSR but stays the image format it was for LVMSR, which makes a condition fail during the list_changed_blocks call.

      posted in News
      dthenot
    • RE: XCP-ng 8.3 updates announcements and testing

      @Andrew Hello Andrew,

      Thank you for reporting.
      It appears that CBT on FileSR-based SRs is not working in combination with data-destroy (the option that allows removing the VDI content while keeping only the CBT metadata).
      Can you confirm that you are using a FileSR (ext or nfs)?
      Is it possible to disable purge data on the CR job?

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @acebmxer Hello,

      The error VDI_CBT_ENABLED means that XAPI refuses to move the VDI so as not to break the CBT chain.
      You can disable CBT on the VDI before migrating it, but if you have snapshots with CBT enabled it can get complicated and might require removing them before moving the VDI.
      We have changes planned to improve the CBT handling in this kind of case.
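      For reference, a sketch of what disabling CBT before the migration could look like with the xe CLI (the UUIDs are placeholders, and the snapshot-of filter is an assumption — verify on your pool first):

      ```shell
      # Disable CBT on the VDI so XAPI accepts the move. This drops the
      # changed-block tracking data, so the next incremental backup becomes full.
      xe vdi-disable-cbt uuid=<vdi-uuid>

      # Snapshots of this VDI with CBT enabled may need the same treatment
      # (or removal) before the migration succeeds.
      xe vdi-list snapshot-of=<vdi-uuid> params=uuid,cbt-enabled
      ```
      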

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @rzr Host updated 🙂

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @ovicz Hello,

      From what I saw in your logs, you have a non-QCOW2 sm version; it made the QCOW2 VDIs unavailable to the storage stack and XAPI lost them.
      You can update again while enabling the QCOW2 repo:

      yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2
      

      An SR scan will then make the VDIs available to XAPI again. You will have to identify them and connect them to the VMs manually though, since this information was lost.
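      Once the QCOW2-capable sm is back, the rescue steps could look like this (UUIDs and the device position are placeholders):

      ```shell
      # Rescan the SR so XAPI rediscovers the QCOW2 VDIs.
      xe sr-scan uuid=<sr-uuid>

      # List the rediscovered VDIs to identify them (name-labels were lost).
      xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size

      # Recreate the link to the VM, then plug it.
      xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=1 type=Disk mode=RW
      xe vbd-plug uuid=<vbd-uuid>
      ```
      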

      posted in News
    • RE: Booting to Dracut (I trusted ChatGPT)

      @nuentes Hello,

      Following an AI already seems dangerous enough, no need for Skynet 😆

      There is a documentation part about regenerating the initrd: https://docs.xcp-ng.org/troubleshooting/common-problems/#initrd-is-missing-after-an-update

      You can likely use what you did above to mount the XCP-ng FS and then regenerate the initrd using this command.
      It's not an initramfs that you need to generate but an initrd 🙂
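      As a rough sketch, assuming the root partition is /dev/sda1 — the dracut invocation and initrd filename below are assumptions, so follow the linked documentation for the exact command on your host:

      ```shell
      # Mount the XCP-ng root filesystem from the rescue environment.
      mount /dev/sda1 /mnt
      mount --bind /dev /mnt/dev
      mount --bind /proc /mnt/proc
      mount --bind /sys /mnt/sys
      chroot /mnt

      # Inside the chroot: pick the installed kernel version (uname -r would
      # report the rescue kernel, not the installed one), then regenerate the
      # initrd for it. Assumes a single installed kernel.
      kver=$(ls /lib/modules | head -n1)
      dracut -f /boot/initrd-${kver}.img ${kver}
      ```
      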

      posted in XCP-ng
    • RE: SR_BACKEND_FAILURE_78 when trying to create VDI

      @cmanos No problem, glad I could help.
      As Olivier also pointed out above, it's not an issue anymore when using QCOW2, which is currently in beta, so hopefully it's only a short-lived workaround 🙂

      posted in Management
    • RE: SR_BACKEND_FAILURE_78 when trying to create VDI

      @cmanos Hello, there is a problem with VHD on devices with a 4KiB block size; you can use the largeblock SR to work around the issue:
      https://xcp-ng.org/forum/topic/8901/largeblocksr-for-4kib-blocksize-disks
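      To illustrate: you can first check the device's block sizes, then create the SR with the type from the linked topic (treat the exact device-config keys as assumptions; check the topic before running):

      ```shell
      # A 4KiB physical block size is what triggers the VHD problem.
      blockdev --getpbsz /dev/sdX   # physical block size
      blockdev --getss /dev/sdX     # logical sector size

      # Create a largeblock SR on the disk instead of a plain ext SR.
      xe sr-create host-uuid=<host-uuid> type=largeblock \
          name-label="Local 4KiB SR" device-config:device=/dev/sdX
      ```
      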

      posted in Management
    • RE: Unable to add new node to pool using XOSTOR

      @olivierlambert In 8.2 yes, the LINSTOR sm version is a separate package; that's not the case in 8.3 anymore.

      posted in XOSTOR
    • RE: SR Garbage Collection running permanently

      @Razor_648 While I was writing my previous message, I was reminded that there are also issues with the LVHDoISCSI SR and CBT; you should disable CBT on your backup job and on all VDIs on the SR. It might help with the issue.
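      A possible way to disable CBT on every VDI of the SR at once: `xe ... --minimal` prints a comma-separated UUID list, which you can split into lines and feed to `xe vdi-disable-cbt`. The `cbt-enabled` filter is an assumption — test on a single VDI first. Shown here on a sample string:

      ```shell
      # Stand-in for: xe vdi-list sr-uuid=$SR_UUID cbt-enabled=true --minimal
      sample="u1,u2,u3"
      echo "$sample" | tr ',' '\n'

      # On a real host, the full pipeline would be:
      #   xe vdi-list sr-uuid=$SR_UUID cbt-enabled=true --minimal \
      #     | tr ',' '\n' | xargs -r -n1 -I{} xe vdi-disable-cbt uuid={}
      ```
      
      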

      posted in Management
    • RE: SR Garbage Collection running permanently

      @Razor_648 Hi,

      The log you showed only means that it couldn't compare two VDIs using their CBT.
      It sometimes happens that a CBT chain becomes disconnected.

      Disabling leaf-coalesce means it won't run on leaves, so VHD chains will always be at least 2 levels deep.

      You migrated 200 VMs; every disk of those VMs had a snapshot taken that then needs to be coalesced, and that can take a while.
      Your backups then also take a snapshot on each run that needs to be coalesced.

      There is a GC in both XCP-ng 8.2 and 8.3.
      The GC runs independently of auto-scan. If you really want to disable it, you can do so temporarily using /opt/xensource/sm/cleanup.py -x -u <SR UUID>; it will pause the GC until you press Enter. I guess you could run it in a tmux session to keep it paused until the next reboot. But it would be better to find the problem, or if there really is no problem, to let the GC work until it's finished.
      Needing 15 minutes to take a snapshot is a bit weird though; it would point to a problem.
      Do you have any errors other than the CBT one in your SMlog?
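      The tmux idea could look like this (sketch: cleanup.py -x waits for Enter, so a detached session keeps the GC paused; ending the session should let the GC run again):

      ```shell
      # Pause the GC on the SR in a detached tmux session.
      tmux new-session -d -s pause-gc '/opt/xensource/sm/cleanup.py -x -u <SR UUID>'

      # Later: end the session so cleanup.py exits and the GC can resume.
      tmux kill-session -t pause-gc
      ```
      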

      posted in Management
    • RE: Possible to reconnect SR automatically?

      @manilx Hi,

      yum install plug-late-sr
      

      Should do the trick to install it 🙂

      posted in Development
    • RE: XCP-ng 8.3 updates announcements and testing

      @bufanda You just need to make sure to have QCOW2 versions of both sm and blktap.
      Otherwise, a non-QCOW2 sm version would drop the QCOW2 VDIs from the XAPI database and you would lose the VBDs linking them to VMs as well as the VDI names.
      So it could be painful depending on how many you have 😬
      But even in the case where you installed a non-QCOW2 sm version, you would only lose the QCOW2 VDIs from the DB; they would not be deleted or anything. Reinstalling a QCOW2 version and then rescanning the SR would make them reappear. But you would then have to identify them again (lost name-label) and relink them to their VMs.
      We try to keep our QCOW2 version on top of the testing branch of XCP-ng but we could miss an update 🙂
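      To check what is installed before updating, something like this could help (the grep pattern is an assumption about the package naming; the yum line is the one quoted earlier in this thread):

      ```shell
      # Verify the installed sm and blktap packages carry the qcow2 suffix.
      rpm -qa | grep -E '^(sm|blktap)-'

      # Update while keeping the QCOW2 repo enabled so yum picks the QCOW2 builds.
      yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2
      ```
      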

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @bufanda Hello,

      There are equivalent sm packages in the qcow2 repo for testing; the XAPI ones will be coming soon.
      You can update while enabling the QCOW2 repo to get the QCOW2 versions of sm and blktap, and get the XAPI version later if you want.

      posted in News
    • RE: XOSTOR hyperconvergence preview

      @JeffBerntsen That's what I meant: the installation method written in the first post still works in 8.3, and the script still works as expected; it basically only creates the VG/LV needed on the hosts before you create the SR.

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      @gb.123 Hello,
      The instructions in the first post are still the way to go 🙂

      posted in XOSTOR
    • RE: VDI Chain on Deltas

      @nvoss said in VDI Chain on Deltas:

      What would make the force restart work when the scheduled regular runs dont?

      I'm not sure what you mean.
      The backup needs to take a snapshot to have a point to compare against before exporting data.
      This snapshot creates a new level of VHD that would need to be coalesced, but the number of VHDs in the chain is limited, so it fails.
      This is caused by the fact that the garbage collector can't run, because it can't edit the corrupted VDI.
      Since there is a corrupted VDI, it stops running to avoid creating more problems on the VDI chains.
      Sometimes corruption means that we don't know whether a VHD has a parent, for example; in that case we can't know what the chain looks like, i.e. which VHDs belong to which chain in the SR (Storage Repository).

      VDI: Virtual Disk Image in this context
      VHD: the format of VDI we use at the moment in XCP-ng

      After removing the corrupted VDI, possibly automatically by the migration process (you may have to do it by hand), you can run an sr-scan on the SR and it will launch the GC again.
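      The final steps might look like this (UUIDs are placeholders; only destroy the VDI once you are sure it is the corrupted one):

      ```shell
      # Remove the corrupted VDI by hand if the migration did not already do it.
      xe vdi-destroy uuid=<corrupted-vdi-uuid>

      # Rescan the SR; this also kicks the garbage collector again.
      xe sr-scan uuid=<sr-uuid>
      ```
      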

      posted in Backup
    • RE: VDI Chain on Deltas

      @nvoss No, the GC is blocked because only one VDI is corrupted, the one with the check.
      All the other VDIs are on a long chain because they couldn't coalesce.
      Sorry, the BATMAP is the block allocation table: it's the information in the VHD that tracks which blocks exist locally.
      Migrating the VDI might indeed work; I can't really be sure.

      posted in Backup
    • RE: VDI Chain on Deltas

      @nvoss The VHD is reported corrupted on its BATMAP. You can try to repair it with vhd-util repair but it will likely not work.
      I have seen people recover from this kind of error by doing a vdi-copy.
      You could try a VM copy or a VDI copy, link the new VDI to the VM again, and see if it's alright.
      The corrupted VDI is blocking the garbage collector, so the chains are long; that's the error you see on the XO side.
      You might need to remove the chain by hand to resolve the issue.
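      In order, the attempts described above could look like this (sketch; paths, UUIDs, and the device position are placeholders):

      ```shell
      # 1. Try to repair the VHD in place (likely to fail, but cheap to try).
      vhd-util repair -n /var/run/sr-mount/<sr-uuid>/<vdi-uuid>.vhd

      # 2. Otherwise, copy the VDI to a (possibly different) SR...
      xe vdi-copy uuid=<vdi-uuid> sr-uuid=<destination-sr-uuid>

      # 3. ...then attach the copy to the VM in place of the corrupted one.
      xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<new-vdi-uuid> device=0 type=Disk mode=RW
      ```
      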

      posted in Backup
    • RE: VDI Chain on Deltas

      @nvoss Could you try to run vhd-util check -n /var/run/sr-mount/f23aacc2-d566-7dc6-c9b0-bc56c749e056/3a3e915f-c903-4434-a2f0-cfc89bbe96bf.vhd?

      posted in Backup