• XOSTOR hyperconvergence preview

    Pinned Moved
    6 Votes
    465 Posts
    876k Views
    snk33
    @ronan-a we need CBT to use third-party backup & replication solutions such as Veeam. On the other hand, XOSTOR with VHD disks might be fine using Xen Orchestra's backup & replication features, but it needs more testing on our end to make sure it works well enough. One big question is about the snapshot chain: coming from the VMware world, where keeping snapshots can hurt I/O performance, we're not very comfortable with it. We also need to find a solution for low-RPO replication. On our VMware infrastructure we use Zerto DR for this, but since XOA is more like Veeam and relies on snapshots, a low RPO will be tough to achieve (unless taking snapshots doesn't affect performance at all?).
  • Ran into a new auth issue with xostor?

    0 Votes
    5 Posts
    81 Views
    J
    @Mathieu-L linstor n l was included in my original post. All nodes were updated to the May 2026 Security and Maintenance Updates for XCP-ng 8.3 LTS, and all nodes were restarted. When May 2026 Updates #2 for XCP-ng 8.3 LTS was released, I installed it on all hosts a couple of days later, without restarting any host. The issue happened when xen04 was restarted. I had used systemctl restart linstor-controller as described here (https://xcp-ng.org/forum/post/105309) to restart the controller.
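For readers unfamiliar with the shorthand: linstor n l is the abbreviated form of linstor node list. A typical health check around a controller restart looks like this (a sketch; run the restart on whichever host currently holds the controller):

```shell
# Show each LINSTOR node and its connection state
# ("linstor n l" is the short form of this command)
linstor node list

# Show every resource and its replication state
linstor resource list

# Restart the controller service on the host where it runs
systemctl restart linstor-controller
```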
  • XOSTOR appears to be broken on the new XCP-NG May 2026 update

    0 Votes
    8 Posts
    349 Views
    G
    @dthenot said: @ccooke Hello, you should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts: yum update sm sm-fairlock. Then you should be able to re-plug the SR on the master and proceed with the RPU. Hello, I had the same problem, and this command resolved the issue; it needs to be run on every host. Everything is working fine again, although I had to complete the pool update manually.
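The fix above, spelled out as a command sequence (the UUIDs are placeholders, and the xe re-plug step shown here is one way to re-attach the SR on the master; adapt to your pool):

```shell
# Run on EVERY host in the pool: update the storage manager packages
yum update sm sm-fairlock

# On the pool master: find the XOSTOR SR's PBDs, then re-plug them
# (<sr-uuid> and <pbd-uuid> are placeholders for your SR)
xe pbd-list sr-uuid=<sr-uuid>
xe pbd-plug uuid=<pbd-uuid>
```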
  • XOA loses connection to hosts during VM migration / creation on XOSTOR SR

    0 Votes
    1 Posts
    169 Views
    No one has replied
  • Comparison with Ceph

    0 Votes
    2 Posts
    224 Views
    olivierlambert
    Different use cases: Ceph works better with more hosts (at least 6 or 7), while XOSTOR is a better fit between 3 and 7/8 hosts. We might have better Ceph support for large clusters in the future.
  • Unable to create XOSTOR volume

    1 Votes
    6 Posts
    444 Views
    SuperDuckGuy
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I tried creating disk groups of 3-4 drives, but hit the same issue.
  • Minimums for XOstor disk configuration?

    0 Votes
    6 Posts
    407 Views
    D
    And to really round this out: the MTBF for any of these drives is in the millions of hours (1.2-3M), which works out to roughly 137 to 342 years of continuous use. Basically, if a drive dies, just replace it no matter what; in the end, the reliability of these drives is meant to outlast all of us. Unless you actually need some specific feature in a particular form factor or model, don't bother.
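The arithmetic behind those year figures is easy to check; a quick sketch, assuming continuous 24/7 operation and a 365-day year:

```python
# Convert drive MTBF figures from hours to years of continuous use.
HOURS_PER_YEAR = 24 * 365  # 8760 hours

for mtbf_hours in (1.2e6, 3.0e6):
    years = mtbf_hours / HOURS_PER_YEAR
    print(f"{mtbf_hours:,.0f} h -> {years:.2f} years")
```

Note that MTBF is a fleet-level statistic, not a promise about any single drive, which is why "just replace it when it dies" remains the practical advice.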
  • Reset XOSTOR trial

    0 Votes
    2 Posts
    176 Views
    Danp
    @tmnguyen You can open a support ticket and request that we reactivate your XOSTOR trial licenses to match your existing XOA trial.
  • 0 Votes
    7 Posts
    713 Views
    I
    @ronan-a Thanks
  • SR.Scan performance within XOSTOR

    0 Votes
    10 Posts
    1k Views
    nikade
    @irtaza9 It scans all the VDIs on the SR to see if anything has changed, whether a coalesce is needed, and so on. I don't think it will be a big issue if you increase the auto-scan-interval value to, let's say, 5 minutes (300 seconds). But remember that anything regarding the VDIs on the SR will then take up to 5 minutes to be reflected, including triggering a coalesce after removing snapshots.
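If you do want to raise the scan interval, it is set per SR via other-config, as in this sketch (the exact key name read by XAPI's periodic scan is an assumption here; verify against your XAPI release before relying on it):

```shell
# Raise the automatic scan interval of an SR to 300 seconds
# (<sr-uuid> is a placeholder for the XOSTOR SR's UUID)
xe sr-param-set uuid=<sr-uuid> other-config:auto-scan-interval=300

# Verify the setting
xe sr-param-get uuid=<sr-uuid> param-name=other-config param-key=auto-scan-interval
```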
  • Refresh the XCP-ng + XOSTOR ISO ?

    Solved
    0 Votes
    4 Posts
    457 Views
    S
    That worked; thank you!!!
  • QCOW2 support on XOSTOR

    0 Votes
    3 Posts
    538 Views
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to make more robust (HA performance, potential race conditions, etc.). Regarding other important points: supporting volumes larger than 2TB has significant impacts on synchronization, RAM usage, coalesce, etc., and we need to find a way to cope with these changes. The coalesce algorithm should be changed so that it no longer depends on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR. The coalesce behavior is not exactly the same for QCOW2, and we believe this could currently have a negative impact on the API implementation in the case of LINSTOR. In short: QCOW2 has introduced changes that require a long-term investment in several areas before we can even consider supporting this format on XOSTOR.
  • XOSTOR on 8.3?

    xostor xcp-ng 8.3
    0 Votes
    37 Posts
    11k Views
    olivierlambert
    You can manually install XOSTOR yes.
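A rough sketch of what a manual install looks like, based on the installation script circulated in the XOSTOR preview thread (the script name, flags, and disk path here are assumptions; check the preview thread's first post for the current procedure):

```shell
# On each host of the pool, run the XOSTOR install script,
# pointing it at the local disk(s) to dedicate to the SR.
# (Script location elided; see the XOSTOR preview thread.)
./install --disks /dev/sdb --thin
```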
  • XOSTOR / XOA down

    0 Votes
    4 Posts
    626 Views
    P
    @bdenv-r Are you trying XOSTOR on 8.2 or 8.3? I had many performance problems and tap-drive locks on my VDIs with XOSTOR on 8.2, and would like to know whether XOSTOR on 8.3 still has these issues.
  • XOSTOR: 2 diskful nodes and 1 diskless node

    0 Votes
    3 Posts
    545 Views
    F
    @olivierlambert Thanks!
  • Multiple disks groups

    0 Votes
    6 Posts
    2k Views
    henri9813
    Hello @DustinB, the page at https://vates.tech/xostor/ says: "The maximum size of any single Virtual Disk Image (VDI) will always be limited by the smallest disk in your cluster." But in this case, maybe it can still be stored on the 2TB disks? Maybe others can answer; I didn't test it.
  • XOSTOR Global network disruption test

    1 Votes
    1 Posts
    313 Views
    No one has replied
  • Unable to add new node to pool using XOSTOR

    0 Votes
    10 Posts
    2k Views
    henri9813
    Hello, I tried on a new pool, in a slightly different scenario: I am not creating the XOSTOR SR for now, whereas in my previous example I tried to add a node as a replacement for an existing one. I only ran the install script on node 1. When I try to make node2 join the pool, I reproduce the incompatible sm error I got previously. The bizarre thing is that I don't get the license issue I had in Xen Orchestra (maybe it was not related after all?). Here are the complete logs:

    ```
    pool.mergeInto
    {
      "sources": ["17510fe0-db23-9414-f3df-2941bd34f8dc"],
      "target": "cc91fcdc-c7a8-a44c-65b3-a76dced49252",
      "force": true
    }
    {
      "code": "POOL_JOINING_SM_FEATURES_INCOMPATIBLE",
      "params": ["OpaqueRef:090b8da1-9654-066c-84f9-7ab15cb101fd", ""],
      "call": {
        "duration": 1061,
        "method": "pool.join_force",
        "params": ["* session id *", "<MASTER_IP>", "root", "* obfuscated *"]
      },
      "message": "POOL_JOINING_SM_FEATURES_INCOMPATIBLE(OpaqueRef:090b8da1-9654-066c-84f9-7ab15cb101fd, )",
      "name": "XapiError",
      "stack": "XapiError: POOL_JOINING_SM_FEATURES_INCOMPATIBLE(OpaqueRef:090b8da1-9654-066c-84f9-7ab15cb101fd, ) at Function.wrap (file:///etc/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12) at file:///etc/xen-orchestra/packages/xen-api/transports/json-rpc.mjs:38:21 at runNextTicks (node:internal/process/task_queues:60:5) at processImmediate (node:internal/timers:454:9) at process.callbackTrampoline (node:internal/async_hooks:130:17)"
    }
    ```
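POOL_JOINING_SM_FEATURES_INCOMPATIBLE is raised when the joining host's storage manager feature set differs from the pool's. A quick way to compare before retrying the join (a sketch; package names as used elsewhere in this forum):

```shell
# Run on the pool master AND on the joining node, then compare output:
rpm -q sm sm-fairlock

# If the versions differ, bring the joining node to the same level
# before retrying the pool join
yum update sm sm-fairlock
```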
  • Backup fail whereas xostor cluster is "healthy"

    0 Votes
    4 Posts
    753 Views
    henri9813
    Hello @ronan-a, I will reproduce the case: I will destroy one hypervisor again and re-trigger it. Thank you @ronan-a and @olivierlambert. If you need me to test some special cases, don't hesitate; we have a pool dedicated to this.
  • Recovery from lost node in HA

    0 Votes
    3 Posts
    574 Views
    henri9813
    @olivierlambert No. For once, I followed the installation steps carefully ^^'