    dthenot
    @dthenot
    Vates 🪐 XCP-ng Team

    Reputation: 57 · Profile views: 44 · Posts: 35 · Followers: 0 · Following: 0
    Groups: Storage Team, Vates 🪐 XCP-ng Team, Admin

    Best posts made by dthenot

    • RE: New XCP-ng developers

      Hello,

      I'm a new developer on XCP-ng; I'll be working on the Xen side to improve performance.
      I'm a recent graduate of the University of Versailles Saint-Quentin, with a specialty in parallel computing and HPC, and I have a strong interest in operating systems.

      posted in News
    • LargeBlockSR for 4KiB blocksize disks

      Hello,

      As some of you may know, there is currently a problem with disks with a 4KiB blocksize: they cannot be used as SR disks.
      It is a limitation of the vhd-util utilities that is not easily fixed.
      As a stopgap, we quickly developed a SMAPI driver that uses losetup's ability to emulate another sector size to work around the problem for the moment.

      The real solution will involve SMAPIv3, for which the first driver is available to test: https://xcp-ng.org/blog/2024/04/19/first-smapiv3-driver-is-available-in-preview/

      To go back to the LargeBlock driver, it is available in 8.3 in sm 3.0.12-12.2.

      To set it up, it is as simple as creating an EXT SR with the xe CLI, but with type=largeblock.

      xe sr-create host-uuid=<host UUID> type=largeblock name-label="LargeBlock SR" device-config:device=/dev/nvme0n1
      

      It does not support using multiple devices because of quirks with LVM and the EXT SR driver.

      It automatically creates a loop device with a sector size of 512 B on top of the 4KiB device and then creates an EXT SR on top of this emulated device.
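
      For the curious, a rough sketch of the losetup call the driver performs under the hood (the device name is just an example; the driver handles this for you):

      # Emulate a 512 B sector size on top of the 4KiB device; prints the /dev/loopN it allocates
      losetup --find --show --sector-size 512 /dev/nvme0n1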

      This driver is a workaround; we have automated tests, but they can't catch everything.
      If you have any feedback or problems, don't hesitate to share them here 🙂

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @ph7 It's only enabled for the two yum commands where --enablerepo is explicitly used.
      It's disabled in the config otherwise.
      No need to do anything 🙂

      posted in News
    • RE: CBT: the thread to centralize your feedback

      @olivierlambert I am 🙂

      posted in Backup
    • RE: XCP-ng 8.3 updates announcements and testing

      For people testing the QCOW2 preview, please be aware that you need to update with the QCOW2 repo enabled. If you install the new non-QCOW2 version, you risk QCOW2 VDIs being dropped from the XAPI database until you have installed the QCOW2 version and re-scanned the SR.
      Being dropped from XAPI means losing the name-label, the description and, worse, the links to a VM for these VDIs.
      There should be a blktap, sm and sm-fairlock update of the same versions as listed below in the QCOW2 repo.

      If you have correctly added the QCOW2 repo linked here (https://xcp-ng.org/forum/post/90287), you can update like this:

      yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-qcow2
      yum update --enablerepo=xcp-ng-testing,xcp-ng-qcow2
      reboot
      

      Versions:

      • blktap: 3.55.4-1.1.0.qcow2.1.xcpng8.3
      • sm: 3.2.12-3.1.0.qcow2.1.xcpng8.3
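
      To confirm the expected packages landed after the update, you can query the installed versions (plain rpm usage; package names taken from the list above):

      rpm -q blktap sm sm-fairlock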
      posted in News
    • RE: XCP-ng 8.3 betas and RCs feedback 🚀

      @jhansen
      Hello,
      I created a thread about the NBD issue where VBDs are left connected to Dom0.
      I added what I already know about the situation on my side.
      Anyone observing the error can help us by sharing what they observed in the thread.

      https://xcp-ng.org/forum/topic/9864/vdi-staying-connected-to-dom0-when-using-nbd-backups

      Thanks

      posted in News
    • RE: LargeBlockSR for 4KiB blocksize disks

      Hello again,

      It is now available in 8.2.1 with the testing packages; you can install them by enabling the testing repository and updating.
      It is available in sm 2.30.8-10.2.

      yum update --enablerepo=xcp-ng-testing sm xapi-core xapi-xe xapi-doc
      

      You then need to restart the toolstack.
      Afterwards, you can create the SR with the command from the post above.
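
      Restarting the toolstack is done with the usual command:

      xe-toolstack-restart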

      posted in News
    • RE: Best CPU performance settings for HP DL325/AMD EPYC servers?

      @olivierlambert @S-Pam Indeed, it's normal: Dom0 doesn't see the NUMA information; the hypervisor handles the compute and memory allocation. You can check the wiki about manipulating VM allocation with the NUMA architecture if you want, but in normal use cases it's not worth the effort.
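
      If you are curious, you can still see the topology as the hypervisor sees it from Dom0 (xl info with the -n flag also prints the NUMA information):

      xl info -n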

      posted in Compute
    • RE: Recommended CPU Scheduler / Topology ?

      Hello 🙂

      So, you can of course do some configuration by hand to alleviate some of the cost the architecture imposes on virtualization.
      But as you can imagine, the scheduler will move the vCPUs around and sometimes break L3 locality if it moves one to a remote core.
      I asked someone more informed than me about this, and he said that running a vCPU is always better than trying to keep it local, so this is only useful under specific conditions (having enough resources).

      You can use the cpupool functionality to isolate VMs on a specific NUMA node.
      But it's only interesting if you really want more performance, since it's a manual process and can be cumbersome.

      You can also pin vCPUs to specific physical cores to keep L3 locality, but it only helps if you have a small number of VMs running on those particular cores. So yes, it might be a small gain (or even a loss).

      There are multiple ways to pin cores, most of them with xl, but if you want the pinning to persist across VM reboots you need to use xe. This matters especially if you want to pin a VM to a node and need its memory allocated on that node, since memory allocation can only be done at boot time. Pinning vCPUs after boot using xl can create problems if you pin them to one node while the VM memory is allocated on another node.

      You can see the VM NUMA memory information with the command xl debug-key u; xl dmesg.

      With xl:
      Pin a vCPU:

      xl vcpu-pin <Domain> <vcpu id> <cpu id>
      

      e.g. xl vcpu-pin 1 all 2-5 pins all the vCPUs of VM 1 to cores 2 to 5.

      With CPUPool:

      xl cpupool-numa-split # Creates one cpupool per NUMA node
      xl cpupool-migrate <VM> <Pool>
      

      (CPUPool only works for guests, not dom0)

      And with xe:

      xe vm-param-set uuid=<UUID> VCPUs-params:mask=<mask> # To add a pinning
      xe vm-param-remove uuid=<UUID> param-name=VCPUs-params param-key=mask # To remove the pinning
      

      The mask above is a list of CPU ids separated by commas, e.g. 0,1,2,3.
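
      For example, pinning a VM to the first four cores and then checking the result (the UUID is a placeholder; xl vcpu-list shows the effective affinity):

      xe vm-param-set uuid=<UUID> VCPUs-params:mask=0,1,2,3
      # After the VM (re)boots, verify the affinity:
      xl vcpu-list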

      Hope this helps; I will add all of this to the XCP-ng documentation soon 🙂

      posted in Compute
    • RE: Matching volume/resource/lvm on disk to VDI/VHD?

      @cmd Hello,

      It's described here in the documentation: https://docs.xcp-ng.org/xostor/#map-linstor-resource-names-to-xapi-vdi-uuids
      It might be possible to add a parameter in the sm-config of the VDI to ease this link; I'll put a card in our backlog to see if it's doable.

      posted in XOSTOR

    Latest posts made by dthenot

    • RE: VDI Chain on Deltas

      @nvoss Could you try to run vhd-util check -n /var/run/sr-mount/f23aacc2-d566-7dc6-c9b0-bc56c749e056/3a3e915f-c903-4434-a2f0-cfc89bbe96bf.vhd?

      posted in Backup
    • RE: VDI Chain on Deltas

      @nvoss Hello, the UNDO LEAF-COALESCE usually has a cause that is listed in the error above it. Could you share that part please? 🙂

      posted in Backup
    • RE: LargeBlockSR for 4KiB blocksize disks

      @yllar Maybe it was indeed because the loop device was not completely created yet.
      No error for this GC run.

      Everything should be ok then 🙂

      posted in News
    • RE: LargeBlockSR for 4KiB blocksize disks

      @yllar

      Sorry, I missed the first ping.

      May  2 08:31:40 a1 SM: [18985] ['/sbin/vgs', '--readonly', 'VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679']
      May  2 08:32:24 a1 SM: [18985]   pread SUCCESS
      May  2 08:32:24 a1 SM: [18985] ***** Long LVM call of 'vgs' took 43.6255850792
      

      That would explain why it took a long time to create: 43 seconds for a single call to vgs.
      Can you try running a vgs call yourself on your host?
      Does it take a long time?
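
      For example, timing the same call the SMAPI made (the VG name is taken from your log above):

      time /sbin/vgs --readonly VG_XenStorage-07ab18c4-a76f-d1fc-4374-babfe21fd679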

      This exception is "normal":

      May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
      May  2 08:32:25 a1 SMGC: [19336]          ***********************
      May  2 08:32:25 a1 SMGC: [19336]          *  E X C E P T I O N  *
      May  2 08:32:25 a1 SMGC: [19336]          ***********************
      May  2 08:32:25 a1 SMGC: [19336] gc: EXCEPTION <class 'util.SMException'>, SR 42535e39-4c98-22c6-71eb-303caa3fc97b not attached on this host
      May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3388, in gc
      May  2 08:32:25 a1 SMGC: [19336]     _gc(None, srUuid, dryRun)
      May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 3267, in _gc
      May  2 08:32:25 a1 SMGC: [19336]     sr = SR.getInstance(srUuid, session)
      May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1552, in getInstance
      May  2 08:32:25 a1 SMGC: [19336]     return FileSR(uuid, xapi, createLock, force)
      May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 2334, in __init__
      May  2 08:32:25 a1 SMGC: [19336]     SR.__init__(self, uuid, xapi, createLock, force)
      May  2 08:32:25 a1 SMGC: [19336]   File "/opt/xensource/sm/cleanup.py", line 1582, in __init__
      May  2 08:32:25 a1 SMGC: [19336]     raise util.SMException("SR %s not attached on this host" % uuid)
      May  2 08:32:25 a1 SMGC: [19336]
      May  2 08:32:25 a1 SMGC: [19336] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
      May  2 08:32:25 a1 SMGC: [19336] * * * * * SR 42535e39-4c98-22c6-71eb-303caa3fc97b: ERROR
      May  2 08:32:25 a1 SMGC: [19336]
      

      It's the garbage collector trying to run on the SR while the SR is still in the process of attaching.
      It's odd though, because it's the call to sr_attach that launched the GC.
      Does the GC run normally on this SR on subsequent attempts?

      Otherwise, I don't see anything worrying in the logs you shared.
      It should be safe to use 🙂

      posted in News
    • RE: non-zero exit, , File "/opt/xensource/sm/EXTSR", line 78 except util.CommandException, inst: ^ SyntaxError: invalid syntax

      @FMOTrust Hello,

      Good news that you found the problem.
      Yes, in XCP-ng 8.3, python should point to a 2.7.5 version while python3 points to 3.6.8 at the moment.
      I imagine you are on 8.2.1 though, since the SMAPI runs on Python 3 in 8.3,
      while it is Python 2 only on 8.2.1 and so expects python to point to the 2.7.5 version.
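
      A quick way to check what the interpreters point to on a host:

      python --version    # should report Python 2.7.5 on 8.2.1
      python3 --version   # reports Python 3.6.8 on 8.3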

      posted in Backup
    • RE: non-zero exit, , File "/opt/xensource/sm/EXTSR", line 78 except util.CommandException, inst: ^ SyntaxError: invalid syntax

      @FMOTrust Hello,

      Could you give us the output of yum info sm please?

      posted in Backup
    • RE: Issue with SR and coalesce

      Hi, the XAPI plugin multi is being called on another host but is failing with an IOError.
      It does a few things on a host related to LVM handling.
      It's failing on one of them; you should look at the host reporting the error to find the full error in its SMlog.
      The plugin itself is located in /etc/xapi.d/plugins/on-slave; it's the function named multi.
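
      To find the full traceback on the host where it failed, grepping its SMlog for the plugin is a reasonable start (the pattern and context size are only suggestions):

      grep -B 20 "on-slave" /var/log/SMlog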

      posted in Backup