Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    @rzr Always a reboot after big updates, as instructed/required.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    This is mostly a thought exercise for discussion, and maybe you've already thought about all this and made choices based on performance or other metrics. Since version 9 is being designed around Alma 10 as a base, would it make sense to revisit the HCI options? Everyone seems to scream for Rook/Ceph; I've never used it, so I'm not sure. Some people want Kubernetes or some container system installed. What about setting up Kubernetes (k3s or RKE2) and using it to provide Longhorn as storage? Do I know what this would take to get going? No, not yet.

    I've been working with Harvester, which uses Longhorn v1 or, experimentally, Longhorn v2. I'm also finding that you really need Kubernetes and Rancher running to manage many aspects of Harvester without resorting to YAML or Helm scripts. So I'm diving down the "must learn a bit of Kubernetes so I can run Rancher so I can use Harvester the way other platforms work" rabbit hole.

    Downsides of Longhorn (or maybe just downsides of Harvester): it cannot use NFS out of the box. You need to install the NFS CSI driver, and even then it is only for backups; you can also install S3 and do the same. Longhorn is also fairly slow, even over a 25 Gbps connection to an NVMe drive. The local performance of this NVMe has been tested at 3 GB/s with ESXi, so I'm not sure why I can't get faster, but that's not the point of this discussion.

    Just thought I would bring up another choice that I've never seen discussed and throw it to the winds for consideration going forward. V9 is so young that major changes could be accomplished if they made functional sense. I know you spent years working with Linbit to get what we have now, and I've not tried this either, so it's hard to say anything good or bad about it.
  • 3k Topics
    28k Posts
    florent
    @Mark-C This is not forgotten, but file-level restore has been deprioritized, so it didn't get much work. It is now quite high on our backlog, so expect good news in the coming months.
  • Our hyperconverged storage solution

    44 Topics
    731 Posts
    olivierlambert
    Different use cases: Ceph is better with more hosts (at least 6 or 7), while XOSTOR is better with 3 to 7/8 hosts. We might have better Ceph support in the future for large clusters.
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.
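
The k3s-plus-Longhorn idea discussed above can be sketched with the official install script and Helm chart. This is a minimal single-server sketch, not a production layout: node counts, replica settings, and everything cluster-specific are left at defaults, and a multi-node HCI setup would need more planning.

```shell
# Install k3s (single-server sketch; agents would join with K3S_URL/K3S_TOKEN)
curl -sfL https://get.k3s.io | sh -

# Install Longhorn from its official Helm chart
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace

# Longhorn registers a StorageClass named "longhorn" that PVCs can reference
kubectl get storageclass longhorn
```

Note that the NFS limitation mentioned above applies here too: Longhorn itself provides block-style volumes, and NFS or S3 endpoints come into play mainly as backup targets configured separately.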