
    Xen Orchestra Container Storage Interface (CSI) for Kubernetes

    Infrastructure as Code · 14 Posts · 6 Posters · 1.0k Views
    • olivierlambert Vates πŸͺ Co-Founder CEO

      I'm not sure I get the question then πŸ€” It's not a technical requirement, it's a design decision.

      • Cyrille Vates πŸͺ DevOps Team @bvitnik

        @bvitnik As Olivier said, it's more of a design decision than a technical requirement. The idea behind using XO is to have a single point of entry, regardless of the number of pools, etc.

        For example, this allows mapping Kubernetes regions to Xen Orchestra pools and Kubernetes zones to Xen Orchestra hosts, all behind a single entry point and a single set of credentials.
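
        As a purely hypothetical sketch, such a mapping could surface on the Kubernetes side through the standard topology labels (the actual labels and names the CSI driver uses are not specified here):

        # Hypothetical illustration only: expose the XO pool/host topology to the
        # scheduler through the well-known Kubernetes topology labels.
        kubectl label node worker-1 topology.kubernetes.io/region=xo-pool-prod
        kubectl label node worker-1 topology.kubernetes.io/zone=xcp-host-03
        kubectl get nodes -L topology.kubernetes.io/region,topology.kubernetes.io/zone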

        • bvitnik @Cyrille

          @Cyrille My concern is that you are closing the door for people who do not need (or do not want) XO in their stack. Maybe they are using other ways to manage the stack, possibly custom-developed, and XO would just be one more point of failure, another security concern, and so on.

          From what I can gather, XO effectively acts as an API proxy here, plus as a list of pools. That's a rather insignificant (and forced?) role from a technical point of view, considering XO has much, much more functionality beyond what XCP-ng and XAPI offer themselves. All of that goes unused and is not required for this integration.

          • Cyrille Vates πŸͺ DevOps Team @bvitnik

            Actually, it's not a closed door; it's more a door that is opening for people who are already using both Xen Orchestra and Kubernetes πŸ€”

            From a technical point of view, it makes more sense for us to use XO, because its API is easier to use, especially with the new REST API. On the application side, it handles many things that we then don't have to deal with ourselves. For VDIs, perhaps that doesn't amount to much, but for other things such as backups, live migrations, templates and VM creation, it's easier. Moreover, using a single SDK to develop our tools makes sense for our small DevOps team in terms of development speed, stability and security.
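
            To give a rough idea, here is a minimal sketch of querying the XO REST API with curl; the token-cookie authentication and the /rest/v0 collection paths are assumptions taken from the public Xen Orchestra documentation, not necessarily what the CSI driver itself calls:

            # Minimal sketch (assumed endpoints, not the CSI driver's actual calls):
            # one entry point and one credential, whatever the number of pools behind XO.
            curl -s -b "authenticationToken=$XO_TOKEN" https://xo.example.org/rest/v0/pools
            curl -s -b "authenticationToken=$XO_TOKEN" https://xo.example.org/rest/v0/vdis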

            • olivierlambert Vates πŸͺ Co-Founder CEO

              Again, XCP-ng and Xen Orchestra are really meant to work together: that’s by design. Our goal is to offer a unified stack with one consistent REST API to manage everything, across any number of pools.

              XO already handles a ton of things: auth (with OIDC, SAML, etc.), multi-pool aggregation, RBAC/ACLs, task tracking, templates, backups, live migrations, etc. By building on top of XO, we can focus on adding real value instead of re-implementing all that logic in every third-party program we maintain, fully open source and for free.

              And honestly, I don’t see any issue relying on XO: everything is fully open source, and all features are available for free from the sources, just like it’s always been. Nobody’s forcing you to use one or the other: if you’d rather build directly on XAPI, you absolutely can.

              • jmara

                Great stuff, looking forward to it. I will try the CSI in the next couple of weeks πŸ™‚

                @bvitnik There is an old CSI driver for XCP-ng (about 5 years old) that talks directly to the Xen API, but I'd rather have a middleware which, as @olivierlambert already stated, has ACLs and security built in. Otherwise you will eventually end up with a broken Xen cluster, because you have a k8s node with cluster-wide privileges on the XenAPI.

                @olivierlambert Is there any loose roadmap for the CSI? πŸ™‚

                Cheers,
                Jan

                • olivierlambert Vates πŸͺ Co-Founder CEO

                  The roadmap depends a lot on the feedback we get on it πŸ˜‰ The more demand and popularity there is, the faster we'll implement stuff πŸ™‚

                  • ThasianXi

                    My validation of this was not successful; I used the Quick Start PoC.
                    Pods eventually went into CrashLoopBackOff after ErrImagePull and ImagePullBackOff.
                    I created a GitHub token with these permissions: public_repo, read:packages. I also tried a token with more permissions (although that was futile); I figured it required at least the aforementioned ones.

                    I have since uninstalled via the script but captured the following events from the controller and one of the node pods:

                    kgp -nkube-system | grep csi*
                    
                    csi-xenorchestra-controller-748db9b45b-z26h6             1/3     CrashLoopBackOff   31 (2m31s ago)   77m
                    csi-xenorchestra-node-4jw9z                              1/3     CrashLoopBackOff   18 (42s ago)     77m
                    csi-xenorchestra-node-7wcld                              1/3     CrashLoopBackOff   18 (58s ago)     77m
                    csi-xenorchestra-node-8jrlq                              1/3     CrashLoopBackOff   18 (34s ago)     77m
                    csi-xenorchestra-node-hqwjj                              1/3     CrashLoopBackOff   18 (50s ago)     77m
                    

                    Pod events:
                    csi-xenorchestra-controller-748db9b45b-z26h6

                    Normal  BackOff  3m48s (x391 over 78m)  kubelet  Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
                    

                    csi-xenorchestra-node-4jw9z

                    Normal   BackOff  14m (x314 over 79m)    kubelet  Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
                    Warning  BackOff  4m21s (x309 over 78m)  kubelet  Back-off restarting failed container node-driver-registrar in pod csi-xenorchestra-node-4jw9z_kube-system(b533c28b-1f28-488a-a31e-862117461964)
                    

                    I can deploy again and capture more information if needed.

                    • nathanael-h Vates πŸͺ DevOps Team @ThasianXi

                      @ThasianXi Hello, thanks for the report. It looks like the image pull step is failing. Can you check that the token generated from GitHub is working and allows you to pull the image?

                      A simple test on a Docker install could ease the verification:

                      docker login ghcr.io -u USERNAME -p TOKEN
                      docker pull ghcr.io/vatesfr/xenorchestra-csi:v0.0.1
                      

                      Also note that only "classic" personal access tokens are supported.

                      More documentation here: https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic
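
                      If the pull works locally, it can also help to compare that with the pull secret the cluster actually uses. A minimal sketch, assuming a docker-registry secret named regcred in kube-system (adjust the names to your setup):

                      # Sketch: (re)create the registry secret, then inspect what it really contains.
                      kubectl -n kube-system create secret docker-registry regcred \
                        --docker-server=ghcr.io --docker-username=USERNAME --docker-password=TOKEN
                      kubectl -n kube-system get secret regcred \
                        -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d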

                      • ThasianXi @nathanael-h

                        @nathanael-h
                        🏁 The image pull to my local computer was successful, using the same classic personal access token that I generated and set as the regcred secret.
                        (screenshot: 2602_ghcr_xocsi.png)


                        πŸ’‘ Looking at the documentation again, and since I am not using MicroK8s, I tried something different, but the result was the same: the pods never transitioned to a running state.

                        This time, prior to executing the install script, I updated the kubelet-registration-path and the volume path in the csi-xenorchestra-node-single.yaml and csi-xenorchestra-node.yaml files.
                        (I believe this would be an opportunity to update the README for clarity on what to update based on the Kubernetes platform, i.e. MicroK8s vs. non-MicroK8s -- I can submit a PR for this, if you like.)
                        excerpts:

                         - --kubelet-registration-path=/var/lib/kubelet/plugins/csi.xenorchestra.vates.tech/csi.sock
                         #- --kubelet-registration-path=/var/snap/microk8s/common/var/lib/kubelet/plugins/csi.xenorchestra.vates.tech/csi.sock
                        -------------------------
                         volumes:
                                - hostPath:
                                    path: /var/lib/kubelet/plugins/csi.xenorchestra.vates.tech
                                    type: DirectoryOrCreate
                                  name: socket-dir
                        

                        On the control-plane:

                        [root@xxxx kubelet]# pwd
                        /var/lib/kubelet
                        [root@xxxx  kubelet]# tree plugins
                        plugins
                        └── csi.xenorchestra.vates.tech
                        
                         kgp -nkube-system | grep csi
                        csi-xenorchestra-controller-748db9b45b-w4zk4             2/3     ImagePullBackOff   19 (12s ago)     41m
                        csi-xenorchestra-node-6zzv8                              1/3     CrashLoopBackOff   11 (3m51s ago)   41m
                        csi-xenorchestra-node-8r4ml                              1/3     CrashLoopBackOff   11 (3m59s ago)   41m
                        csi-xenorchestra-node-btrsb                              1/3     CrashLoopBackOff   11 (4m11s ago)   41m
                        csi-xenorchestra-node-w69pc                              1/3     CrashLoopBackOff   11 (4m3s ago)    41m
                        

                        Excerpt from /var/log/messages:

                        Feb 18 22:21:44 xxx kubelet[50541]: I0218 22:21:44.474317   50541 scope.go:117] "RemoveContainer" containerID="26d29856a551fe7dfd873a3f8124584d400d1a88d77cdb4c1797a9726fa85408"
                        Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.475900036-05:00" level=info msg="Checking image status: ghcr.io/vatesfr/xenorchestra-csi-driver:edge" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
                        Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.476149865-05:00" level=info msg="Image ghcr.io/vatesfr/xenorchestra-csi-driver:edge not found" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
                        Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.476188202-05:00" level=info msg="Image ghcr.io/vatesfr/xenorchestra-csi-driver:edge not found" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
                        Feb 18 22:21:44 xxx kubelet[50541]: E0218 22:21:44.476862   50541 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-xenorchestra-node-btrsb_kube-system(433e69c9-2da9-4e23-b92b-90918bd36248)\", failed to \"StartContainer\" for \"xenorchestra-csi-driver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/vatesfr/xenorchestra-csi-driver:edge\\\"\"]" pod="kube-system/csi-xenorchestra-node-btrsb" podUID="433e69c9-2da9-4e23-b92b-90918bd36248"
                        

                        If you have any other suggestions in the meantime, or if I can collect more information, let me know.
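
                        In the meantime, one more check I can run is pulling the image directly on an affected node with crictl, since the logs show CRI-O (a rough sketch, using the same ghcr.io token):

                        # Rough sketch: ask CRI-O itself to pull with the same ghcr.io credentials.
                        crictl pull --creds "USERNAME:TOKEN" ghcr.io/vatesfr/xenorchestra-csi-driver:edge
                        crictl images | grep xenorchestra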
