• XOSTOR hyperconvergence preview

    Pinned Moved
    5 Votes
    446 Posts
    454k Views
    BlueToast

    @Danp Success with this - thanks for the assist. 🙂 Executed with great success:

    yum install xcp-ng-linstor
    yum install xcp-ng-release-linstor
    ./install --disks /dev/nvme0n1 --thin
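
    As a hedged follow-up sketch (not taken from the post above): once the packages are installed and the install script has prepared the disks, the SR itself is typically created against the linstor driver roughly as below, where the host UUID, group name and redundancy value are placeholders for your own setup:

    xe sr-create type=linstor name-label=XOSTOR host-uuid=<master_uuid> \
        device-config:group-name=linstor_group/thin_device \
        device-config:redundancy=2 shared=true provisioning=thin
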
  • XOSTOR on 8.3?

    0 Votes
    18 Posts
    885 Views
    OhSoNoob

    @ronan-a I am very interested in implementing XOSTOR on XCP-NG 8.3 in my production environment. Currently running two nodes but I can expand to 3. How can I help to create a stable release of XOSTOR? I am not a developer but maybe I can provide feedback.

  • Any negative about using a bonded vlan interface for xostor traffic?

    0 Votes
    1 Posts
    53 Views
    No one has replied
  • XOSTOR from source

    0 Votes
    8 Posts
    1k Views
    olivierlambert

    Yes. Meaning we are sure it was correctly installed on supported hosts. This limits the possible outcomes if there's a problem (a bit like XOA vs XO sources, but we have like 10 years of feedback from XO sources, so we can do community support here with relative confidence).

  • XOSTOR and mdadm software RAID

    0 Votes
    6 Posts
    388 Views
    J

    @OhSoNoob I've used XOSTOR on top of MDRAID and it seemed to work well for me during my testing. I ran tests of it on top of MD RAID 1, 5, and 10 (MDRAID's "RAID 10", which isn't really RAID 10) and had good luck with it. XOSTOR is really adding a second layer of redundancy at that point, similar to MDRAID 5+1 builds, so it's almost overkill. Almost.

    Where I see the most benefit from XOSTOR on MDRAID is on top of RAID 10 or RAID 0 arrays. Depending on the speed of your drives, you might get some benefit from the increased read speed (and read/write speed for RAID 0). In addition, RAID 10 would give you some additional redundancy, so that losing a drive wouldn't mean the loss of that node for XOSTOR's purposes, possibly making recovery easier.

    The ability to keep some redundancy might also be useful for a stretched cluster or some other situation where your network links between XOSTOR nodes aren't as fast as they should be; the ability to recover at the RAID level might be much faster than recovering or rebuilding an entire node over a slow link.

    @ronan-a, I'm not sure if you remember, but the very first tests of XOSTOR I ran, shortly after it was introduced, were on top of RAID 10 arrays. I kept that test cluster alive and running until equipment failure (failed motherboards, nothing related to XOSTOR or MDRAID) forced me to scrap it. I had similar teething pains to others while XOSTOR was being developed and debugged during the test phase, but nothing related to running on top of MDRAID as far as I could tell.
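
    As a rough, hedged illustration of the layering described above (the array name, member disks and RAID level are assumptions, not values from the post): the MD array is assembled first and then handed to the XOSTOR install script as the backing disk.

    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    ./install --disks /dev/md0 --thin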

  • How to manage XOSTOR SRs (add/remove)

    0 Votes
    1 Posts
    166 Views
    No one has replied
  • XOSTOR Performance

    0 Votes
    8 Posts
    544 Views
    olivierlambert

    Not only, but that's where it's more visible.

  • Newbie questions

    0 Votes
    2 Posts
    217 Views
    olivierlambert

    Hi,

    Because you can enter a scenario where some XOSTOR packages might require a restart while the rest don't. We plan to detect that and make the RPU algorithm handle the order of operations a bit differently 🙂

  • Three-node Networking for XOSTOR

    0 Votes
    15 Posts
    1k Views
    H

    @ronan-a
    Unfortunately, I am in the process of reinstalling XCP-ng on the nodes to start from scratch. I just figured I had tried too many things and somewhere forgot to undo the 'wrong' configs, so I can't run the command now. I had run this command before, though, when I posted all the screenshots. The output had two entries (from memory):

    1. StltCon <mgmt_ip> 3366 Plain
    2. <storage_nw_ip> 3366 Plain

    I will repost with the required data when I get everything configured again.

    Thanks.
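
    For anyone following along, a hedged sketch of how that output is usually obtained and adjusted with the LINSTOR client (the node name and IP are placeholders, not values from this thread):

    linstor node interface list <node_name>
    linstor node interface modify <node_name> default --ip <storage_nw_ip>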

  • Removing xcp-persistent-database resource

    0 Votes
    1 Posts
    135 Views
    No one has replied
  • XCP-ng host error - unable to create any VMs

    0 Votes
    10 Posts
    676 Views
    Danp

    @fatek XOSTOR isn't currently compatible with the 8.3 beta, so you need to use XCP 8.2.1 if you want to use XOSTOR now.

  • XOSTOR SR_BACKEND_FAILURE_78 VDI Creation failed

    0 Votes
    3 Posts
    400 Views
    F

    I had a similar error.
    I gave up and decided to wait for the official XCP-ng 8.3 release, which should support XOSTOR 1.0.

    https://xcp-ng.org/forum/post/77160

  • Let's Test the HA

    0 Votes
    33 Posts
    7k Views
    nikade

    @BHellman said in Let's Test the HA:

    Disclaimer: I work for LINBIT, makers of DRBD and LINSTOR, the tech behind XOSTOR.

    We can do highly available iSCSI targets; we even have guides on our website that take you through it step by step. These would be outside of XCP-ng, but would serve the same purpose.

    If there is any interest from Vates to integrate DRBD/HA into XCP-ng, we're always open to discussions.

    Sounds interesting for pretty much everyone coming from VMware vSAN running SQL Server in a failover cluster, if I'm allowed to jump the gun 🙂
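
    For context (a sketch, not from the posts above): LINBIT's usual route to an HA iSCSI target on top of DRBD/LINSTOR is linstor-gateway, along the lines of the command below, where the IQN, service IP and size are purely illustrative:

    linstor-gateway iscsi create iqn.2024-01.com.example:target1 192.168.10.100/24 1G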

  • XOSTOR 4 node - storage network

    0 Votes
    2 Posts
    257 Views
    F

    So, yes, I had to create the interface again.

    Live migration on a 100gig network is looking pretty sweet!

  • XOSTOR 4 node trial: not yet!

    Solved
    0 Votes
    9 Posts
    516 Views
    F

    Ticket has been updated. Thanks to Support.
    XOSTOR cluster has been successfully created.

  • XOA xostor creation

    0 Votes
    3 Posts
    297 Views
  • XOSTOR to 8.3?

    0 Votes
    2 Posts
    576 Views
    D

    Looking forward to this too. Can't wait to test on my 8.3 cluster. 😁

  • ZFS

    1 Votes
    3 Posts
    451 Views
    H

    Understandable, I forgot that this is built on top of LINSTOR and not on DRBD directly.
    I believe this post by LINBIT is then more appropriate:

    Stacked Block Storage in LINBIT SDS (aka LINSTOR)

    The stack outlined there seems interesting, as it means we could, for example, use ZFS as the volume manager and create a mirrored zpool for fast access and a raid-z2 or ZFS dRAID pool for slow access, utilizing LINSTOR and DRBD to distribute this across the cluster and bcache to unify the two zpools into a fast/slow tier.

    I understand that right now the storage controller API of XCP-ng and XO is not set up to handle such a configuration, but could I potentially set up an ext4 or ZFS filesystem on top of this stack and use the already existing storage controller infrastructure, or would I be losing out on performance and/or features by doing so?
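
    A minimal, hedged sketch of the ZFS-backed LINSTOR layering described above, outside of what the XOSTOR sm driver drives itself today (pool names, devices and the node name are assumptions):

    zpool create fast mirror /dev/nvme0n1 /dev/nvme1n1
    zpool create slow raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    linstor storage-pool create zfs <node_name> pool_fast fast
    linstor storage-pool create zfs <node_name> pool_slow slow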

  • Migrate disk to linstor fails

    0 Votes
    1 Posts
    247 Views
    No one has replied
  • Multiple Volumes Alternate Primary

    0 Votes
    8 Posts
    335 Views
    ronan-a

    @David I think the complexity is being able to offer a simple interface/API for users to configure multiple storages. Maybe through SMAPIv3.
    In any case, we currently only support one storage pool; the sm driver would have to be reworked to support several. It also probably requires visibility from XOA's point of view. Lots of points to discuss. I will create a card on our internal Kanban.

    Regarding multiple XOSTOR SRs, we must:

    - Add a way to move the controller volume to a specific storage pool.
    - Ensure the controller is still accessible to the remaining SRs despite the destruction of an SR.
    - Use a lock mechanism to protect the LINSTOR env.
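
    At the LINSTOR level (independent of what the sm driver exposes today), additional storage pools and placement rules already look roughly like this hedged sketch, where the pool, volume group and node names are made up for illustration:

    linstor storage-pool create lvmthin <node_name> pool_ssd vg_ssd/thin_ssd
    linstor resource-group create rg_ssd --storage-pool pool_ssd --place-count 2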