XCP-ng

    Xen Orchestra Container Storage Interface (CSI) for Kubernetes

    • bvitnik @olivierlambert

      @olivierlambert That's all fine and understandable, but my question is more on the technical side of things... and still not answered 🙂

      • olivierlambert Vates 🪐 Co-Founder CEO

        I'm not sure I get the question then 🤔 It's not a technical requirement, it's a design decision.

        • Cyrille Vates 🪐 DevOps Team @bvitnik

          @bvitnik As Olivier said, it's more of a design decision than a technical requirement. The idea behind using XO is to have a single point of entry, regardless of the number of pools, etc.

          For example, this allows the mapping of Kubernetes regions to Xen Orchestra pools and Kubernetes zones to Xen Orchestra hosts with a single entry point and credentials.
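
           To make that mapping concrete, here is a minimal, hypothetical sketch using the standard Kubernetes topology labels; the node, pool and host names are placeholders and not taken from the CSI driver's documentation:

           # Hypothetical example: expose the XO pool as the region and the XCP-ng host
           # as the zone through the well-known Kubernetes topology labels.
           kubectl label node worker-1 topology.kubernetes.io/region=my-xo-pool
           kubectl label node worker-1 topology.kubernetes.io/zone=my-xcp-host

           # Verify that the labels were applied.
           kubectl get node worker-1 --show-labels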

          • bvitnik @Cyrille

            @Cyrille My concern is that you are closing the door for people who do not need (or do not want) XO in their stack. Maybe they manage the stack in other ways, possibly with custom-developed tooling, and XO would just be one more point of failure, another security concern, etc.

            From what I can gather, XO effectively acts as an API proxy here, plus a registry of pools. That's a rather insignificant (and forced?) role from a technical point of view, considering XO has much, much more functionality beyond what XCP-ng and XAPI offer themselves. All of that is unused and not required for this integration.

            • Cyrille Vates 🪐 DevOps Team @bvitnik

              Actually, it's not a closed door; it's more of a door opening for people who are already using both Xen Orchestra and Kubernetes 🤔

              From a technical point of view, it makes more sense for us to use XO because its API is easier to use, especially with the new REST API. On the application side, it handles many things that we then don't have to deal with. For VDIs, perhaps not so much, but for other things such as backups, live migrations, templates and VM creation... it's easier. Moreover, using a single SDK to develop tools makes sense for our small DevOps team in terms of development speed, stability and security.
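
              To illustrate the API point, here is a minimal, hypothetical sketch of listing VMs through the XO REST API, assuming an endpoint under /rest/v0 and a pre-generated authentication token; the host name and token are placeholders, so check the XO REST API documentation for the exact routes:

              # Hypothetical sketch: list VMs via the Xen Orchestra REST API.
              # xo.example.org and $XO_TOKEN are placeholders.
              curl -s -b "authenticationToken=$XO_TOKEN" \
                "https://xo.example.org/rest/v0/vms"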

              • olivierlambert Vates 🪐 Co-Founder CEO

                Again, XCP-ng and Xen Orchestra are really meant to work together: that's by design. Our goal is to offer a unified stack with one consistent REST API to manage everything, across any number of pools.

                XO already handles a ton of things: auth (with OIDC, SAML, etc.), multi-pool aggregation, RBAC/ACLs, task tracking, templates, backups, live migrations, etc. By building on top of XO, we can focus on adding real value instead of re-implementing all that logic in every 3rd-party program we maintain, fully open source and for free.

                And honestly, I don't see any issue relying on XO: everything is fully open source, and all features are available for free from the sources, just like it's always been. Nobody's forcing you to use one or the other: if you'd rather build directly on XAPI, you absolutely can.

                • jmara

                  Great stuff, looking forward to it. I will try the CSI in the next couple of weeks 🙂

                  @bvitnik There is an old CSI driver for XCP-ng (about 5 years old) that talks directly to the Xen API, but I'd rather have a middleware which, as @olivierlambert already stated, has ACLs and security built in.
                  Otherwise you will eventually end up with a broken Xen cluster, because you have a k8s node with cluster-wide privileges on the XenAPI.

                  @olivierlambert Is there a rough roadmap for the CSI? 🙂

                  Cheers,
                  Jan

                  • olivierlambert Vates 🪐 Co-Founder CEO

                    The roadmap depends a lot on the feedback we get on it 😉 The more demand and popularity there is, the faster we'll implement things 🙂

                    • ThasianXi

                      My validation of this was not successful; I used the Quick Start PoC.
                      Pods eventually went into CrashLoopBackOff after ErrImagePull and ImagePullBackOff.
                      I created a GitHub token with these permissions: public_repo, read:packages. I also tried a token with more permissions (although that turned out to be futile as well); I figured it needed at least the aforementioned ones.

                      I have since uninstalled via the script but captured the following events from the controller and one of the node pods:

                      kgp -nkube-system | grep csi*
                      
                      csi-xenorchestra-controller-748db9b45b-z26h6             1/3     CrashLoopBackOff   31 (2m31s ago)   77m
                      csi-xenorchestra-node-4jw9z                              1/3     CrashLoopBackOff   18 (42s ago)     77m
                      csi-xenorchestra-node-7wcld                              1/3     CrashLoopBackOff   18 (58s ago)     77m
                      csi-xenorchestra-node-8jrlq                              1/3     CrashLoopBackOff   18 (34s ago)     77m
                      csi-xenorchestra-node-hqwjj                              1/3     CrashLoopBackOff   18 (50s ago)     77m
                      

                      Pod events:
                      csi-xenorchestra-controller-748db9b45b-z26h6

                      Normal  BackOff  3m48s (x391 over 78m)  kubelet  Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
                      

                      csi-xenorchestra-node-4jw9z

                      Normal   BackOff  14m (x314 over 79m)    kubelet  Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
                      Warning  BackOff  4m21s (x309 over 78m)  kubelet  Back-off restarting failed container node-driver-registrar in pod csi-xenorchestra-node-4jw9z_kube-system(b533c28b-1f28-488a-a31e-862117461964)
                      

                      I can deploy again and capture more information if needed.
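
                      For reference, these are the standard kubectl commands I would use to capture more detail on a redeploy (pod and container names taken from the output above):

                      # Describe the controller pod and list recent events for the CSI pods.
                      kubectl -n kube-system describe pod csi-xenorchestra-controller-748db9b45b-z26h6
                      kubectl -n kube-system get events --sort-by=.lastTimestamp | grep csi
                      # Logs from the failing sidecar container on one of the node pods.
                      kubectl -n kube-system logs csi-xenorchestra-node-4jw9z -c node-driver-registrar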

                      • nathanael-h Vates 🪐 DevOps Team @ThasianXi

                        @ThasianXi Hello, thanks for the report. It looks like the image pull step is failing. Can you check that the token generated from GitHub works and allows pulling the image?

                        Maybe a simple test on a Docker install could make the verification easier:

                        docker login ghcr.io -u USERNAME -p TOKEN
                        docker pull ghcr.io/vatesfr/xenorchestra-csi:v0.0.1
                        

                        Also note that only "classic" personal access tokens are supported.

                        More documentation here: https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic
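
                        If the token works there, the cluster also needs it as an image pull secret in the namespace where the driver runs. A minimal sketch, assuming the kube-system namespace and an example secret name (the Deployment/DaemonSet must reference it via imagePullSecrets):

                        # Create a pull secret for ghcr.io; "ghcr-pull-secret" is only an example name.
                        kubectl -n kube-system create secret docker-registry ghcr-pull-secret \
                          --docker-server=ghcr.io \
                          --docker-username=USERNAME \
                          --docker-password=TOKEN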
