XCP-ng

    Posts

    • RE: New XCP-ng developers

      Hello,

      I'm a new developer on XCP-ng; I'll work on the Xen side to improve performance.
      I'm a recent graduate of the University of Versailles Saint-Quentin, with a specialty in parallel computing and HPC, and I have a strong interest in operating systems.

      posted in News
    • LargeBlockSR for 4KiB blocksize disks

      Hello,

      As some of you may know, there is currently a problem with disks with a 4KiB block size: they cannot be used as SR disks.
      It is caused by an error in the vhd-util utilities that is not easily fixed.
      As such, we quickly developed an SMAPI driver that uses losetup's ability to emulate another sector size, to work around the problem for the moment.

      The real solution will involve SMAPIv3, whose first driver is available to test: https://xcp-ng.org/blog/2024/04/19/first-smapiv3-driver-is-available-in-preview/

      Coming back to the LargeBlock driver: it is available in 8.3 in sm 3.0.12-12.2.

      Setting it up is as simple as creating an EXT SR with the xe CLI, but with type=largeblock:

      xe sr-create host-uuid=<host UUID> type=largeblock name-label="LargeBlock SR" device-config:device=/dev/nvme0n1
      

      It does not support using multiple devices because of quirks with LVM and the EXT SR driver.

      It automatically creates a loop device with a 512-byte sector size on top of the 4KiB device, then creates an EXT SR on top of this emulated device.
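
      For the curious, here is a minimal sketch of the sector-size emulation the driver performs, assuming util-linux 2.30+ for the --sector-size flag and a hypothetical /dev/nvme0n1 device (the driver automates all of this; shown for illustration only):

      # Expose the 4KiB device through a loop device with 512-byte sectors
      losetup --sector-size 512 --find --show /dev/nvme0n1
      # losetup prints the allocated loop device, e.g. /dev/loop0,
      # which then serves as the backing device for the EXT SR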

      This driver is a workaround; we have automated tests, but they can't catch everything.
      If you have any feedback or problems, don't hesitate to share here 🙂

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @ph7 It's only enabled for the two yum commands where --enablerepo is explicitly passed.
      It's disabled in the config otherwise.
      No need to do anything 🙂
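
      If you want to double-check on your host, something like this should show the repo staying disabled (assuming the usual /etc/yum.repos.d/xcp-ng.repo location for the repo definitions):

      # Show the [xcp-ng-testing] section; enabled=0 means it is off by default
      grep -A 4 '\[xcp-ng-testing\]' /etc/yum.repos.d/xcp-ng.repo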

      posted in News
    • RE: XOSTOR hyperconvergence preview

      @gb.123 Hello,
      The instructions in the first post are still the way to go 🙂

      posted in XOSTOR
    • RE: XCP-ng 8.3 updates announcements and testing

      @Andrew Hello,

      I have been able to find the problem and write a fix; it's in the process of being packaged.
      I can confirm it only happens for file-based SRs when purging snapshots.
      For some reason, the VDI type of the CBT_metadata VDI is cbtlog for FileSR, but it stays whatever image format it was for LVMSR,
      which would make a condition fail during the list_changed_blocks call.
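
      For reference, that call is reachable from the xe CLI if you want to check whether you are affected (hypothetical UUIDs; the result is a base64-encoded bitmap of changed blocks):

      xe vdi-list-changed-blocks vdi-from-uuid=<snapshot VDI UUID> vdi-to-uuid=<current VDI UUID>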

      posted in News
    • RE: Possible to reconnect SR automatically?

      @manilx Hi,

      yum install plug-late-sr
      

      Should do the trick to install it 🙂

      posted in Development
    • RE: Unable to add new node to pool using XOSTOR

      @olivierlambert In 8.2, yes, the LINSTOR sm version is packaged separately; that's no longer the case in 8.3.

      posted in XOSTOR
    • RE: CBT: the thread to centralize your feedback

      @olivierlambert I am 🙂

      posted in Backup
    • RE: XCP-ng 8.3 updates announcements and testing

      For people testing the QCOW2 preview, please be aware that you need to update with the QCOW2 repo enabled. If you install the new non-QCOW2 version, QCOW2 VDIs risk being dropped from the XAPI database until you have installed it and re-scanned the SR.
      Being dropped from XAPI means these VDIs lose their name-label, their description and, worse, their links to a VM.
      There should be blktap, sm and sm-fairlock updates of the same version as above in the QCOW2 repo.

      If you have correctly added the QCOW2 repo linked here: https://xcp-ng.org/forum/post/90287, you can update like this:

      yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-qcow2
      yum update --enablerepo=xcp-ng-testing,xcp-ng-qcow2
      reboot
      

      Versions:

      • blktap: 3.55.4-1.1.0.qcow2.1.xcpng8.3
      • sm: 3.2.12-3.1.0.qcow2.1.xcpng8.3
      posted in News
    • RE: XCP-ng 8.3 betas and RCs feedback 🚀

      @jhansen
      Hello,
      I created a thread about the NBD issue where VBDs are left connected to Dom0.
      I added what I already know about the situation on my side.
      Anyone observing the error can help us by sharing what they observed in that thread.

      https://xcp-ng.org/forum/topic/9864/vdi-staying-connected-to-dom0-when-using-nbd-backups

      Thanks

      posted in News
    • RE: LargeBlockSR for 4KiB blocksize disks

      Hello again,

      It is now available in 8.2.1 with the testing packages; you can install them by enabling the testing repository and updating.
      Available in sm 2.30.8-10.2.

      yum update --enablerepo=xcp-ng-testing sm xapi-core xapi-xe xapi-doc
      

      You then need to restart the toolstack.
      Afterwards, you can create SRs with the command from the post above.
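
      Restarting the toolstack can be done with the standard helper:

      xe-toolstack-restart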

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @acebmxer Hello,

      The VDI_CBT_ENABLED error means that XAPI doesn't want to move the VDI, to avoid breaking the CBT chain.
      You can disable CBT on the VDI before migrating it, but if you have snapshots with CBT enabled it can be complicated, and you might need to remove them before moving the VDI.
      We have changes planned to improve the CBT handling in this kind of case.
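
      A minimal sketch of the disabling step with the xe CLI (hypothetical UUID; note that this breaks the existing CBT chain, so the next backup will be a full one):

      xe vdi-disable-cbt uuid=<VDI UUID>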

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @ovicz Hello,

      From what I saw in your logs, you have a non-QCOW2 sm version; it made the QCOW2 VDIs unavailable to the storage stack, and XAPI lost them.
      If you update again with the QCOW2 repo enabled:

      yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates,xcp-ng-qcow2
      

      An SR scan will then make the VDIs available to XAPI again, though you will have to identify them and connect them to their VMs manually, since this information was lost.
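
      The scan itself, for reference (hypothetical UUID):

      xe sr-scan uuid=<SR UUID>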

      posted in News
    • RE: Best CPU performance settings for HP DL325/AMD EPYC servers?

      @olivierlambert @S-Pam Indeed, it's normal: Dom0 doesn't see the NUMA information; the hypervisor handles the compute and memory allocation. You can see the wiki about manipulating VM allocation on a NUMA architecture if you want, but in normal use cases it's not worth the effort.
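
      If you want to see the topology the hypervisor itself knows about, you can ask Xen directly from Dom0 (the -n flag adds the NUMA information to the output):

      xl info -n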

      posted in Compute
    • RE: Recommended CPU Scheduler / Topology ?

      Hello 🙂

      So, you can of course do some configuration by hand to alleviate some of the cost the architecture puts on virtualization.
      But as you can imagine, the scheduler will move vCPUs around and sometimes break L3 locality if it moves one to a remote core.
      I asked someone more informed than me about this, and he said that running a vCPU at all is always better than trying to make it run locally, so pinning is only useful under specific conditions (having enough resources).

      You can use the cpupool functionality to isolate VMs on a specific NUMA node.
      But it's only interesting if you really want more performance, since it's a manual process and can be cumbersome.

      You can also pin vCPUs on a specific physical core to keep L3 locality, but it only works well if few VMs are running on that particular core. So yes, it might be a small gain (or even a loss).

      There are multiple ways to pin cores, most of them with xl, but if you want the pinning to stick between VM reboots you need to use xe. This matters especially if you want to pin a VM to a node and need its memory allocated on that node, since that can only be done at boot time. Pinning vCPUs after boot using xl can create problems if you pin them on one node while the VM's memory is allocated on another.

      You can see the VM NUMA memory information with:

      xl debug-key u; xl dmesg

      With xl:
      Pin a CPU:

      xl vcpu-pin <Domain> <vcpu id> <cpu id>
      

      e.g. xl vcpu-pin 1 all 2-5 pins all the vCPUs of VM 1 to cores 2 to 5.

      With CPUPool:

      xl cpupool-numa-split # Will create a cpupool by NUMA node
      xl cpupool-migrate <VM> <Pool>
      

      (CPUPool only works for guests, not dom0)

      And with xe:

      xe vm-param-set uuid=<UUID> VCPUs-params:mask=<mask> #To add a pinning
      xe vm-param-remove uuid=<UUID> param-name=VCPUs-params param-key=mask #To remove pinning
      

      The mask above is a list of CPU ids separated by commas, e.g. 0,1,2,3.
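
      For example, to pin the vCPUs of a VM to the first four cores in a way that survives reboots (hypothetical UUID):

      xe vm-param-set uuid=<VM UUID> VCPUs-params:mask=0,1,2,3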

      Hope this is useful; I will add all of this to the XCP-ng documentation soon 🙂

      posted in Compute
    • RE: Booting to Dracut (I trusted ChatGPT)

      @nuentes Hello,

      Following an AI already seems to be dangerous, no need for Skynet 😆

      There is a documentation part about regenerating the initrd: https://docs.xcp-ng.org/troubleshooting/common-problems/#initrd-is-missing-after-an-update

      You can likely use what you did above to mount the XCP-ng FS and then regenerate the initrd using this command.
      It's not an initramfs that you need to generate but an initrd 🙂
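
      A minimal sketch of the regeneration step, following the documentation linked above (assuming you are chrooted into the mounted XCP-ng filesystem; take the kernel version from /boot, not from the rescue system's uname -r):

      # List the installed kernel(s) to find the right version string
      ls /boot/vmlinuz-*
      # Rebuild the initrd for that kernel version
      dracut --force /boot/initrd-<kernel version>.img <kernel version>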

      posted in XCP-ng
    • RE: Matching volume/resource/lvm on disk to VDI/VHD?

      @cmd Hello,

      It's described here in the documentation: https://docs.xcp-ng.org/xostor/#map-linstor-resource-names-to-xapi-vdi-uuids
      It might be possible to add a parameter in the sm-config of the VDI to make this link easier; I'll put a card in our backlog to see if it's doable.

      posted in XOSTOR
    • RE: non-zero exit, , File "/opt/xensource/sm/EXTSR", line 78 except util.CommandException, inst: ^ SyntaxError: invalid syntax

      @FMOTrust Hello,

      Could you give us the output of yum info sm, please?

      posted in Backup
    • RE: SR_BACKEND_FAILURE_78 when trying to create VDI

      @cmanos Hello, there is a problem with VHD on 4KiB block size devices; you can use the largeblock SR to work around the issue:
      https://xcp-ng.org/forum/topic/8901/largeblocksr-for-4kib-blocksize-disks

      posted in Management
    • RE: XCP-ng 8.3 updates announcements and testing

      @rzr Host updated 🙂

      posted in News