
    ronan-a

    @ronan-a

    Vates 🪐 XCP-ng Team
    Reputation: 77 · Profile views: 157 · Posts: 158 · Followers: 2 · Following: 0

    Groups: Storage Team, Vates 🪐 XCP-ng Team, Global Moderator, Admin

    Best posts made by ronan-a

    • RE: XCP-ng team is growing

      Hi everyone!

      Two years ago, I worked on many features in the Xen Orchestra project, like the delta backup algorithm, load balancing, ...
      After all this time, I'm back to contribute to XCP-ng. 🙂

      Currently, I'm working on performance improvements (VM migration, storage...) and on new SMAPIv3 plugins.
      And maybe in the future on other cool stuff. 😉

      Ronan

      posted in News
    • RE: Dev diaries #1: Analyzing storage perf (SMAPIv3)

      qemu-dp: context and parameters

      Here are some new charts. Make sure you understand the global QCOW2 image structure. (See: https://events.static.linuxfound.org/sites/events/files/slides/kvm-forum-2017-slides.pdf)

      [Charts: ioping latency, random read/write, sequential read/write]

      More explicit labels 😉:

      • ext4-ng: qemu-dp with default parameters (O_DIRECT and no-flush)
      • ext4-ng (VHD): tapdisk with VHD (no O_DIRECT + timeout)
      • ext4-ng (Buffer/Flush): no O_DIRECT + flush allowed
      • Cache A: L2-Cache=3MiB
      • Cache B: L2-Cache=6.25MiB
      • Cache C: Entry-Size=8KiB
      • Cache D: Entry-Size=64KiB + no O_DIRECT + flush allowed
      • Cache E: L2-Cache=8MiB + Entry-size=8KiB + no O_DIRECT + flush allowed
      • Cache F: L2-Cache=8MiB + Entry-size=8KiB + no O_DIRECT
      • Cache G: L2-Cache=8MiB + Entry-size=8KiB
      • Cache H: L2-Cache=8MiB + Entry-size=16KiB + no O_DIRECT
      • Cache I: L2-Cache=16MiB + Entry-size=8KiB + no O_DIRECT

      These results were obtained with an Optane NVMe drive. We can see better random write performance with the F configuration than with the default qemu-dp parameters, and the ioping latency is not bad. But it's still not sufficient compared to tapdisk.
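
      As a reference point, here is roughly how these knobs map onto stock QEMU's qcow2 driver options (a sketch only: qemu-dp is driven differently in the SMAPIv3 datapath, and the disk path here is a placeholder):

      # Example: 8 MiB L2 cache, 8 KiB cache entries, no O_DIRECT
      # (cache.direct=off), flushes honored (cache.no-flush=off).
      qemu-system-x86_64 -drive file=disk.qcow2,format=qcow2,if=virtio,l2-cache-size=8M,l2-cache-entry-size=8k,cache.direct=off,cache.no-flush=off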

      So, as said in the previous message, it's time to find the bottleneck in the qemu-dp process. 😉

      posted in News
    • RE: Dev diaries #1: Analyzing storage perf (SMAPIv3)

      qemu-dp/tapdisk and CPU Usage per function call

      Thanks to flamegraphs. 🙂 (See: http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html)
      Analysis and improvements in a future message!

      qemu

      [Flamegraph: CPU usage of the qemu-dp process]

      tapdisk

      [Flamegraph: CPU usage of the tapdisk process]
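
      To reproduce this kind of chart, a minimal sketch using perf and the FlameGraph scripts linked above (sampling rate and duration are arbitrary choices):

      # Sample the qemu-dp process at 99 Hz for 30 s, with call stacks.
      perf record -F 99 -g -p "$(pidof qemu-dp)" -- sleep 30
      # Fold the stacks and render the SVG (scripts from the FlameGraph repo).
      perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > qemu-dp.svg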

      posted in News
    • Dev diaries #1: Analyzing storage perf (SMAPIv3)

      SMAPIv3: results and analysis

      After some investigation, it was discovered that SMAPIv3 is not THE perfect storage interface. Here are some charts to analyze:

      [Benchmark charts comparing the storage types listed below]

      Yeah, there are many storage types:

      • lvm, ext (well known)
      • ext4 (a storage type added in SMAPIv1)
      • ext4-ng (a new storage type added in SMAPIv3 for this benchmark, and surely available in the future)
      • xfs-ng (same idea, but for XFS)

      You can notice the usage of RAID0 with ext4-ng, but it's not important for the moment.

      Let's focus on the performance of ext4-ng/xfs-ng! How can we explain these poor results?! By default, the SMAPIv3 plugins added by Citrix (like gfs2/filebased) use qemu-dp, a fork of QEMU. It's a substitute for the tapdisk/VHD environment, meant to improve performance and remove some limitations, like the maximum size supported by the VHD format (2 TB). QEMU supports QCOW2 images to break this limitation.
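
      As an illustration of that limit, a QCOW2 volume well beyond VHD's 2 TB ceiling can be created with stock qemu-img (file name and size are just examples):

      # A 4 TB virtual disk: impossible with VHD, trivial with QCOW2.
      qemu-img create -f qcow2 big-disk.qcow2 4T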

      So, the performance problem of SMAPIv3 seems related to qemu-dp. And yes... You can see that the results of the ext4-ng VHD and ext4-ng VHDv1 plugins are very close to the SMAPIv1 measurements:

      • The ext4-ng VHDv1 plugin uses the O_DIRECT flag + a timeout like the SMAPIv1 implementation.
      • The ext4-ng VHD plugin does not use the O_DIRECT flag.

      Next, to validate a potential bottleneck in the qemu-dp process, two RAID0 arrays were set up (one with 2 disks and another with 4), and it's interesting to see good usage of the physical disks! There is one qemu process for each disk in our VM, and the disk usage is similar to the performance observed in the Dom0.

      For the future

      The SMAPIv3/qemu-dp pair is not entirely a problem:

      • Good scaling is visible in the RAID0 benchmark.
      • It's easy to add a new storage type to SMAPIv3. (There are two plugin types, Volume and Datapath, automatically detected when added to the system. See: https://xapi-project.github.io/xapi-storage/#learn-architecture)
      • The QCOW2 format is a good alternative to break the size limitation of VHD images.
      • Unlike qemu-dp, a RAID0 on SMAPIv1 does not improve I/O performance.

      Next steps:

      • Understand how qemu-dp is called (context, parameters, ...).
      • Find the bottleneck in the qemu-dp process.
      • Find a solution to improve the performance.
      posted in News
    • RE: XOSTOR hyperconvergence preview

      ⚠ UPDATE AND IMPORTANT INFO ⚠

      I am updating the LINSTOR packages on our repositories.
      This update fixes many issues, especially regarding the HA.

      However, this update is not compatible with the LINSTOR SRs already configured, so it is necessary to DELETE the existing SRs before installing this update.
      We are exceptionally allowing ourselves to force a reinstallation during this beta, since we haven't officially released a production version yet.
      In theory, this should not happen again.

      To summarize:
      1 - Uninstall any existing LINSTOR SR.
      2 - Install the new sm package, "sm-2.30.7-1.3.0.linstor.3.xcpng8.2.x86_64", on all hosts in use (see the example command below).
      3 - Reinstall the LINSTOR SR.
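
      A minimal sketch of step 2 on one host (the repository name is an assumption; use the repo given in the beta instructions):

      # To run on every host of the pool.
      yum install sm-2.30.7-1.3.0.linstor.3.xcpng8.2.x86_64 --enablerepo=xcp-ng-linstor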

      Thank you! 🙂

      posted in XOSTOR
    • RE: XOSTOR on 8.3?

      @fatek Just for information, the current 8.3 version is not usable without major problems. However, I recently rebased all the LINSTOR sm changes from XCP-ng 8.2 to 8.3 in a new package, sm-3.2.3-1.7.xcpng8.3.x86_64.rpm, and we passed the driver tests without too many problems. This RPM should be available during the month of October. Even after its release, we consider that it is not stable enough for production use until we have enough user feedback (but of course this new RPM is synchronized with all the fixes and improvements of version 8.2).

      EDIT: Released on October 25; we originally planned to wait a bit. 🙂

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      @Maelstrom96 Well, there is no simple helper to do that using the CLI.

      So you can create a new node using:

      linstor node create --node-type Combined <NAME> <IP>
      

      Then you must evacuate the old node to preserve the replication count:

      linstor node evacuate <OLD_NAME>
      

      Next, you can change your hostname and restart the services on each host:

      systemctl stop linstor-controller
      systemctl restart linstor-satellite
      

      Finally you can delete the node:

      linstor node delete <OLD_NAME>
      

      After that, you must recreate the diskless resources if necessary. Run linstor advise r to see the commands to execute.

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      @BHellman The first post has a FAQ that I update each time I meet users with a common/recurring problem. 😉

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      @gb-123 You can use this command:

      linstor resource-group modify xcp-sr-linstor_group_thin_device --place-count <NEW_COUNT>
      

      You can confirm the resource group to use with:

      linstor resource-group list
      

      Ignore the default group named DfltRscGrp and take the second one.

      Note: Don't use a replication count greater than 3.

      posted in XOSTOR
    • RE: XCP-ng 8.0.0 Beta now available!

      @peder Fixed! This fix will be available (as soon as possible) in a future xcp-emu-manager package.

      posted in News

    Latest posts made by ronan-a

    • RE: Create a VM with an existing iSCSI disk

      @nodje said in Create a VM with an existing iSCSI disk:

      Is there anyway to achieve an iSCSI mount without automatic LVM VG creation?

      You can try with the "RawISCSISR" driver, but it's not really tested on our side, so I can't confirm that it will work. It doesn't support creating VDIs, but you can use it to access existing LUNs on a target.

      posted in Management
    • RE: XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

      @Dark199 In practice, you should have more info via dmesg or kern.log. I have never seen this error until now; since it impacts VMs, I am afraid it is something quite serious. Are your disks OK? Do you have enough RAM in the Dom-0?

      posted in XOSTOR
    • RE: Unable to enable HA with XOSTOR

      @dslauter You can test the new RPMs using the testing repository. FYI: sm-3.2.3-1.14.xcpng8.3 and http-nbd-transfer-1.5.0-1.xcpng8.3.
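
      A sketch of how to pull them, assuming the standard xcp-ng-testing repository:

      # Enable the testing repo only for this transaction.
      yum update sm http-nbd-transfer --enablerepo=xcp-ng-testing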

      posted in Advanced features
    • RE: Unable to enable HA with XOSTOR

      @dslauter Just for your information, I will update http-nbd-transfer + sm in a few weeks. I fixed many issues regarding HA activation in 8.3, caused by a bad migration of specific Python code from version 2 to version 3.

      posted in Advanced features
    • RE: XOSTOR on 8.3?

      @fatek We have to release important fixes before the end of the year concerning the HA problems I corrected, plus other changes on the smapi side. I recently discussed releasing a stable XOSTOR 8.3 version early next year, in order to move forward with important projects regarding SMAPIv3. But I can't be categorical about a date. We lack user feedback.

      posted in XOSTOR
    • RE: Unable to enable HA with XOSTOR

      @dslauter Are you using XCP-ng 8.3? If this is the case, I think there is a porting problem concerning Python 3...

      posted in Advanced features
    • RE: Unable to enable HA with XOSTOR

      @dslauter said in Unable to enable HA with XOSTOR:

      I don't see any error here. Can you check the other hosts? And how did you create the SR? Is the shared=true flag set?
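
      For reference, a LINSTOR SR is normally created with something like the sketch below; the device-config keys and values here are placeholders, not an exact recipe (see the first post of the XOSTOR thread for the real options):

      xe sr-create type=linstor name-label=XOSTOR host-uuid=<MASTER_UUID> \
          device-config:group-name=linstor_group/thin_device \
          device-config:redundancy=2 device-config:provisioning=thin shared=true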

      posted in Advanced features
    • RE: Unable to enable HA with XOSTOR

      @dslauter Can you check the SMlog/kern.log/daemon.log traces? Without these details, it is not easy to investigate. Thanks!
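
      A quick way to spot recent failures on the affected host (log paths as found in a standard XCP-ng Dom0):

      # Scan the storage and system logs for obvious errors.
      grep -iE "error|exception|backtrace" /var/log/SMlog /var/log/kern.log /var/log/daemon.log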

      posted in Advanced features
    • RE: XOSTOR hyperconvergence preview

      @olivierlambert @Jonathon Unfortunately, we don't maintain this package, so it's not available in our repositories; the simplest thing is to address this problem directly to LINBIT. Maybe there is a regression or something else?

      posted in XOSTOR
    • RE: XOSTOR on 8.3?

      @fatek Use the XOA method directly; it correctly installs the dependencies and is safer regarding disk selection. 😉

      posted in XOSTOR