• XOSTOR hyperconvergence preview

    Pinned Moved · 5 Votes · 457 Posts · 532k Views
    Hello, I plan to install my XOSTOR cluster on a pool of 7 nodes with 3 replicas, but not on all nodes at once, because the disks are in use. Consider node1 through node7, each with 2 disks: sda (128 GB, for the OS) and sdb (1 TB, currently a local SR). I have emptied nodes 6 and 7, so here is what I plan to do. On ALL nodes: set up the linstor packages. Then run the install script on nodes 6 and 7 to add their disks:

        node6# install.sh --disks /dev/sdb
        node7# install.sh --disks /dev/sdb

    Then configure the SR and the linstor plugin manager as follows:

        xe sr-create \
          type=linstor name-label=pool-01 \
          host-uuid=XXXX \
          device-config:group-name=linstor_group/thin_device \
          device-config:redundancy=3 \
          shared=true \
          device-config:provisioning=thin

    Normally, I should then have a LINSTOR cluster of 2 nodes (2 satellites, with the controller placed on one of them at random) with only 2 disks, and therefore only 2/3 working replicas. The cluster SHOULD still be usable (am I right on this point?). The next step would be to move the VMs from node 5 onto it to evacuate node 5, and then add node 5 to the cluster with:

        node5# install.sh --disks /dev/sdb
        node5# xe host-call-plugin \
          host-uuid=node5-uuid \
          plugin=linstor-manager \
          fn=addHost args:groupName=linstor_group/thin_device

    That should deploy a satellite on node 5 and add its disk. I should then have 3/3 working replicas and can start deploying the other nodes progressively. Am I right about this process? As mentioned on the Discord, I will post my feedback and results from my setup once I have finalized it (maybe through a blog post somewhere). Thanks for providing XOSTOR as open source; it's clearly the missing piece of this open-source virtualization stack (vs Proxmox).
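    To verify the replica state at each stage, here is a minimal sketch using the stock LINSTOR CLI on the controller node (plain LINSTOR commands; nothing XOSTOR-specific is assumed):

        # List the known nodes and their satellite state.
        linstor node list

        # Show storage pools and remaining capacity per node.
        linstor storage-pool list

        # Show each resource and its replicas; with redundancy=3 but only
        # two satellites, expect two UpToDate replicas per volume until
        # the third node joins.
        linstor resource list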
  • Recovery from lost node in HA

    0 Votes · 3 Posts · 41 Views
    @olivierlambert No, for once I followed the installation steps carefully ^^'
  • XOSTOR on 8.3?

    xostor xcp-ng 8.3 · 0 Votes · 35 Posts · 3k Views
    olivierlambert
    Sorry, I forgot to publish the news in here: https://xcp-ng.org/blog/2025/06/16/xcp-ng-8-3-is-now-lts/ Indeed, since June 16th, XOSTOR has been available on 8.3.
  • Recovery from lost node

    Solved · 0 Votes · 5 Posts · 192 Views
    olivierlambert
    Excellent news, thanks!
  • Matching volume/resource/lvm on disk to VDI/VHD?

    0 Votes · 3 Posts · 144 Views
    dthenot
    @cmd Hello, it's described here in the documentation: https://docs.xcp-ng.org/xostor/#map-linstor-resource-names-to-xapi-vdi-uuids It might be possible to add a parameter to the sm-config of the VDI to ease this mapping; I'll put a card in our backlog to see if it's doable.
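    In the meantime, a rough way to cross-reference from the XAPI side, sketched with standard xe and LINSTOR commands (the SR UUID is a placeholder):

        # Dump every VDI on the XOSTOR SR with its sm-config.
        xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,sm-config

        # Compare against the resource names LINSTOR reports.
        linstor resource-definition list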
  • Talos K8s Cluster with XOSTOR

    0 Votes · 4 Posts · 281 Views
    @nathanael-h Thanks for the feedback.
  • Adding a node to xostor

    0 Votes · 3 Posts · 215 Views
    @olivierlambert I did open a ticket but thought I would post here as well to see if anyone had insights. Thanks.
  • XOSTOR as shared storage for VDIs?

    0 Votes · 4 Posts · 221 Views
    olivierlambert
    Have you read the doc first? https://docs.xcp-ng.org/xostor/ It gives a nice overview of how it works.
  • XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

    0 Votes · 8 Posts · 393 Views
    @ronan-a [...]

        64 bytes from 172.27.18.161: icmp_seq=21668 ttl=64 time=0.805 ms
        64 bytes from 172.27.18.161: icmp_seq=21669 ttl=64 time=0.737 ms
        64 bytes from 172.27.18.161: icmp_seq=21670 ttl=64 time=0.750 ms
        64 bytes from 172.27.18.161: icmp_seq=21671 ttl=64 time=0.780 ms
        64 bytes from 172.27.18.161: icmp_seq=21672 ttl=64 time=0.774 ms
        64 bytes from 172.27.18.161: icmp_seq=21673 ttl=64 time=0.737 ms
        64 bytes from 172.27.18.161: icmp_seq=21674 ttl=64 time=0.773 ms
        64 bytes from 172.27.18.161: icmp_seq=21675 ttl=64 time=0.835 ms
        64 bytes from 172.27.18.161: icmp_seq=21676 ttl=64 time=0.755 ms
        1004711/1004716 packets, 0% loss, min/avg/ewma/max = 0.712/1.033/0.775/195.781 ms

    I am attaching simple ping stats for the last 11 days. I don't think we can blame the network.
  • Support Status of XOSTOR

    0 Votes · 2 Posts · 248 Views
    Danp
    Hi, XOSTOR on XCP-ng 8.2.1 has been supported since it was released, approximately 9 months ago. XOSTOR on XCP-ng 8.3 is still in the beta phase, so it is not officially supported yet. Regards, Dan
  • Any negative about using a bonded vlan interface for xostor traffic?

    0 Votes · 1 Post · 180 Views
    No one has replied
  • XOSTOR from source

    0 Votes · 8 Posts · 2k Views
    olivierlambert
    Yes. Meaning we are sure it was correctly installed on supported hosts. This limits the possible outcomes if there's a problem (a bit like XOA vs XO from sources, but we have about 10 years of feedback from XO sources, so we can do community support in here with relative confidence).
  • XOSTOR and mdadm software RAID

    0 Votes · 6 Posts · 863 Views
    @OhSoNoob I've used XOSTOR on top of MDRAID and it seemed to work well for me during my testing. I ran tests of it on top of MD RAID 1, 5, and 10 (MDRAID's "RAID 10", which isn't really RAID 10) and had good luck with it. XOSTOR is really adding a second layer of redundancy at that point, similar to MDRAID 5+1 builds, so it is almost overkill. Almost.

    Where I see the most benefit from XOSTOR on MDRAID would be on top of RAID 10 or RAID 0 arrays. Depending on the speed of your drives, you might get some benefit from the increased read speed (and read/write speed for RAID 0). In addition, RAID 10 would give you some additional redundancy, so losing a drive wouldn't mean the loss of that node for XOSTOR's purposes, possibly making recovery easier. Some redundancy at the RAID level might also be useful for a stretched cluster or another situation where the network links between XOSTOR nodes aren't as fast as they should be; recovering at the RAID level might be much faster than recovering or rebuilding an entire node over a slow link.

    @ronan-a, I'm not sure if you remember, but the very first tests of XOSTOR I ran, shortly after it was introduced, were on top of RAID 10 arrays. I kept that test cluster alive and running until equipment failure (failed motherboards, nothing related to XOSTOR or MDRAID) forced me to scrap it. I had similar teething pains to others while XOSTOR was being developed and debugged during the test phase, but nothing related to running on top of MDRAID as far as I could tell.
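    For anyone wanting to reproduce that layering, a minimal sketch, assuming four spare data disks (sdb through sde) per node and the install.sh script from the XOSTOR instructions (adapt device names to your hardware):

        # Build an MD RAID 10 array from the four data disks (on each node).
        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # Persist the array definition across reboots.
        mdadm --detail --scan >> /etc/mdadm.conf

        # Hand the array to XOSTOR instead of a raw disk.
        ./install.sh --disks /dev/md0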
  • How to manage XOSTOR SRs (add/remove)

    0 Votes · 1 Post · 282 Views
    No one has replied
  • XOSTOR Performance

    0 Votes · 8 Posts · 1k Views
    olivierlambert
    Not only, but that's where it's most visible.
  • Newbie questions

    0 Votes · 2 Posts · 356 Views
    olivierlambert
    Hi, because you can get into a scenario where some XOSTOR packages require a restart while the rest do not. We plan to detect that and adapt the order of operations in the RPU algorithm accordingly.
  • Three-node Networking for XOSTOR

    0 Votes · 15 Posts · 2k Views
    @ronan-a Unfortunately, I am in the process of reinstalling XCP-ng on the nodes to start from scratch. I think I tried too many things and somewhere forgot to undo the 'wrong' configs, so I can't run the command now, although I had run it before, when I posted all the screenshots. The output had 2 entries (from memory):

    1. StltCon <mgmt_ip> 3366 Plain
    2. <storage_nw_ip> 3366 Plain

    I will repost with the required data when I get everything configured again. Thanks.
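    For reference, those two entries look like output from the LINSTOR node interface listing; a minimal sketch of the commands involved, assuming a node named node1 and an interface named storage_nic (both illustrative):

        # Show the interfaces LINSTOR knows for a node; the StltCon entry
        # is the connection the controller uses to reach the satellite.
        linstor node interface list node1

        # Register a dedicated storage-network interface on that node.
        linstor node interface create node1 storage_nic <storage_nw_ip>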
  • Removing xcp-persistent-database resource

    0 Votes · 1 Post · 274 Views
    No one has replied
  • XCP-ng host error - unable to create any VMs

    0 Votes · 10 Posts · 1k Views
    Danp
    @fatek XOSTOR isn't currently compatible with the 8.3 beta, so you need to use XCP-ng 8.2.1 if you want to use XOSTOR now.
  • XOSTOR SR_BACKEND_FAILURE_78 VDI Creation failed

    0 Votes · 3 Posts · 851 Views
    I had a similar error. I gave up and decided to wait for the official XCP-ng 8.3 release, which should support XOSTOR 1.0: https://xcp-ng.org/forum/post/77160