• XOSTOR hyperconvergence preview

    Pinned Moved
    6 Votes
    461 Posts
    785k Views
    henri9813
    Hello @ronan-a, but how can we recover from this situation? Thanks!
• SR.Scan performance within XOSTOR

    0 Votes
    4 Posts
    58 Views
    From another post I gathered that there is an auto-scan feature that runs by default every 30 seconds, which seems to cause a lot of issues when the storage contains many disks or when you have a lot of storage. It is not completely clear whether this auto-scan feature is actually necessary, and to some customers the Vates helpdesk has suggested reducing the scan frequency from 30 seconds to 2 minutes, which seems to have improved the overall experience. The command would be: xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID>, where the UUID is the pool master UUID. Of course I won't run that in production without reassurance from Vates support that it won't have a negative impact, but I think it is worth mentioning. In my situation I can see how frequent scans would delay other tasks, considering that my system is effectively always scanning, with the scan task itself probably being affected by it.
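    For reference, here is the command quoted above laid out as it would be run in dom0, together with a read-back using standard xe map-parameter syntax; <POOL MASTER UUID> is a placeholder and, as the post itself says, check with Vates support before changing this in production.
    ```
    # Lower the auto-scan frequency from the default 30 seconds to 120 seconds (value quoted from the post above).
    xe host-param-set other-config:auto-scan-interval=120 uuid=<POOL MASTER UUID>

    # Read the key back to confirm it was set (standard xe map-parameter lookup).
    xe host-param-get uuid=<POOL MASTER UUID> param-name=other-config param-key=auto-scan-interval
    ```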
  • QCOW2 support on XOSTOR

    0 Votes
    3 Posts
    163 Views
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to make more robust (HA performance, potential race conditions, etc.). Regarding other important points: supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc., and we need to find a way to cope with these changes. The coalesce algorithm should be changed so that it no longer depends on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR. The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the API implementation in the case of LINSTOR. In short: QCOW2 has led to changes that require a long-term investment in several topics before we can even consider supporting this format on XOSTOR.
  • XOSTOR on 8.3?

    xostor xcp-ng 8.3
    0 Votes
    37 Posts
    8k Views
    olivierlambert
    Yes, you can manually install XOSTOR.
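    For anyone wondering what a manual install involves, here is a rough sketch based on the public XOSTOR documentation (https://docs.xcp-ng.org/xostor/); the package names, device-config keys and group name below are assumptions to verify against the doc for your exact release, and the disks/volume group must be prepared on every host first as described there.
    ```
    # On every host of the pool: install the LINSTOR packages (names per the XOSTOR doc; verify for your release).
    yum install -y xcp-ng-release-linstor
    yum install -y xcp-ng-linstor

    # Then, once, from the pool master: create the shared XOSTOR SR (device-config keys are assumptions, see the doc).
    xe sr-create type=linstor name-label=XOSTOR shared=true \
      device-config:group-name=linstor_group/thin_device \
      device-config:redundancy=2 \
      device-config:provisioning=thin
    ```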
  • XOSTOR / XOA down

    0 Votes
    4 Posts
    325 Views
    @bdenv-r Are you trying XOSTOR on 8.2 or 8.3? I had many performance problems and tap-drive issues locking my VDIs with XOSTOR on 8.2, and I would like to know whether XOSTOR on 8.3 still has those issues.
• XOSTOR: 2 Diskful nodes and 1 Diskless node

    0 Votes
    3 Posts
    286 Views
    @olivierlambert Thanks!
  • Multiple disks groups

    0 Votes
    6 Posts
    2k Views
    henri9813
    Hello @DustinB, the page at https://vates.tech/xostor/ says: "The maximum size of any single Virtual Disk Image (VDI) will always be limited by the smallest disk in your cluster." But in this case, maybe it can still be stored on the "2TB disks"? Maybe others can answer; I didn't test it.
  • XOSTOR Global network disruption test

    1 Vote
    1 Post
    206 Views
    No one has replied
  • Unable to add new node to pool using XOSTOR

    0 Votes
    10 Posts
    1k Views
    henri9813
    Hello, I tried on a new pool, in a slightly different scenario since I haven't created XOSTOR yet; in my previous example I tried to add a node as a replacement for an existing one. I only ran the install script on node 1. When I try to make node 2 join the pool, I reproduce the incompatible SM error I got previously. The "bizarre" thing is that I don't get the license issue I had in Xen Orchestra (maybe it was not related after all?). Here are the complete logs:
    ```
    pool.mergeInto
    {
      "sources": [ "17510fe0-db23-9414-f3df-2941bd34f8dc" ],
      "target": "cc91fcdc-c7a8-a44c-65b3-a76dced49252",
      "force": true
    }
    {
      "code": "POOL_JOINING_SM_FEATURES_INCOMPATIBLE",
      "params": [ "OpaqueRef:090b8da1-9654-066c-84f9-7ab15cb101fd", "" ],
      "call": {
        "duration": 1061,
        "method": "pool.join_force",
        "params": [ "* session id *", "<MASTER_IP>", "root", "* obfuscated *" ]
      },
      "message": "POOL_JOINING_SM_FEATURES_INCOMPATIBLE(OpaqueRef:090b8da1-9654-066c-84f9-7ab15cb101fd, )",
      "name": "XapiError",
      "stack": "XapiError: POOL_JOINING_SM_FEATURES_INCOMPATIBLE(OpaqueRef:090b8da1-9654-066c-84f9-7ab15cb101fd, )
        at Function.wrap (file:///etc/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)
        at file:///etc/xen-orchestra/packages/xen-api/transports/json-rpc.mjs:38:21
        at runNextTicks (node:internal/process/task_queues:60:5)
        at processImmediate (node:internal/timers:454:9)
        at process.callbackTrampoline (node:internal/async_hooks:130:17)"
    }
    ```
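    Not from the original thread, but a hedged diagnostic sketch for a POOL_JOINING_SM_FEATURES_INCOMPATIBLE error like this one: compare what the pool master and the joining node expose as SM drivers and which storage packages they have installed (the grep pattern is only a guess at package naming).
    ```
    # Run on the pool master AND on the node that fails to join, then diff the two outputs.
    xe sm-list                              # SM drivers/features as XAPI sees them on this host
    rpm -qa | grep -Ei 'linstor|drbd|^sm-'  # installed storage-related packages (pattern is an assumption)

    # If the joining node is missing the LINSTOR SM bits, running the same XOSTOR install
    # script on it (as was done on node 1) should align the SM features before retrying the join.
    ```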
  • Backup fail whereas xostor cluster is "healthy"

    0 Votes
    4 Posts
    488 Views
    henri9813
    Hello @ronan-a, I will reproduce the case: I will re-destroy one hypervisor and retrigger it. Thank you @ronan-a and @olivierlambert. If you need me to test some special cases, don't hesitate; we have a pool dedicated to this.
  • Recovery from lost node in HA

    0 Votes
    3 Posts
    322 Views
    henri9813
    @olivierlambert No, for once I followed the installation steps carefully ^^'
  • Recovery from lost node

    Solved
    0 Votes
    5 Posts
    663 Views
    olivierlambert
    Excellent news, thanks!
  • Matching volume/resource/lvm on disk to VDI/VHD?

    0 Votes
    3 Posts
    516 Views
    dthenot
    @cmd Hello, it's described here in the documentation: https://docs.xcp-ng.org/xostor/#map-linstor-resource-names-to-xapi-vdi-uuids It might be possible to add a parameter in the sm-config of the VDI to make this mapping easier; I'll put a card in our backlog to see if it's doable.
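    As a rough illustration of the manual matching the linked documentation describes (the SR UUID is a placeholder, and the exact lookup of the name mapping is detailed in the doc):
    ```
    # LINSTOR side: one resource per volume (the resource names are not the VDI UUIDs themselves).
    linstor resource list

    # XAPI side: the VDIs of the XOSTOR SR, with their sm-config for cross-checking.
    xe vdi-list sr-uuid=<XOSTOR SR UUID> params=uuid,name-label,sm-config
    ```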
  • Talos K8s Cluster with XOSTOR

    0 Votes
    4 Posts
    931 Views
    @nathanael-h Thanks for the feedback.
  • Adding a node to xostor

    0 Votes
    3 Posts
    527 Views
    @olivierlambert I did open a ticket but thought I would post here as well to see if anyone had insights. Thanks.
  • XOSTOR as shared storage for VDIs?

    0 Votes
    4 Posts
    567 Views
    olivierlambert
    Have you read the doc first? https://docs.xcp-ng.org/xostor/ It gives a nice overview of how it works.
  • XOSTOR 8.3 controller crash with guest OSes shutting down filesystem

    0 Votes
    8 Posts
    990 Views
    @ronan-a [...]
    64 bytes from 172.27.18.161: icmp_seq=21668 ttl=64 time=0.805 ms
    64 bytes from 172.27.18.161: icmp_seq=21669 ttl=64 time=0.737 ms
    64 bytes from 172.27.18.161: icmp_seq=21670 ttl=64 time=0.750 ms
    64 bytes from 172.27.18.161: icmp_seq=21671 ttl=64 time=0.780 ms
    64 bytes from 172.27.18.161: icmp_seq=21672 ttl=64 time=0.774 ms
    64 bytes from 172.27.18.161: icmp_seq=21673 ttl=64 time=0.737 ms
    64 bytes from 172.27.18.161: icmp_seq=21674 ttl=64 time=0.773 ms
    64 bytes from 172.27.18.161: icmp_seq=21675 ttl=64 time=0.835 ms
    64 bytes from 172.27.18.161: icmp_seq=21676 ttl=64 time=0.755 ms
    1004711/1004716 packets, 0% loss, min/avg/ewma/max = 0.712/1.033/0.775/195.781 ms
    I am attaching simple ping stats for the last 11 days. I don't think we can blame the network.
  • Support Status of XOSTOR

    0 Votes
    2 Posts
    342 Views
    Danp
    Hi, XOSTOR on XCP-ng 8.2.1 has been supported since it was released, approximately 9 months ago. XOSTOR on XCP-ng 8.3 is still in the beta phase, so it is not officially supported. Regards, Dan
• Any negatives to using a bonded VLAN interface for XOSTOR traffic?

    0 Votes
    1 Post
    333 Views
    No one has replied
  • XOSTOR from source

    0 Votes
    8 Posts
    3k Views
    olivierlambert
    Yes. Meaning we are sure it was correctly installed on supported hosts. This narrows down the possible causes if there's a problem (a bit like XOA vs XO from the sources, but we have about 10 years of feedback from XO from the sources, so we can do community support here with relative confidence).