XCP-ng

    Kubernetes in production?

    Development
    4 Posts 2 Posters 1.0k Views
    • thepro101

      We are planning to move from VMs to Kubernetes for our websites. We are currently testing the Hub feature in XOA, but need more information about the best way to leverage our current resources for Kubernetes.

      We have a highly available, all-flash TrueNAS M40 SAN; we currently host NFS shares on it that back storage SRs in XCP-ng.

      Is there any guidance on how to configure StorageClasses or LoadBalancer Services for production?
      I'm a little lost on storage, and we're looking at an IngressController exposed as a NodePort Service on 80/443 for now.

      Also, it would be neat to have an IaC workflow (e.g. eksctl) instead of a dialog. It might look something like this:

      ---
      apiVersion: k8s.xcp-ng.org/v1alpha1
      kind: KubernetesCluster
      metadata:
        name: my-cluster
        labels:
          k8s.xcp-ng.org/cluster-name: my-cluster
      spec:
        clusterVIP: 10.0.1.10
        gatewayIP: 10.0.1.1
        nameservers:
          - ip: 10.0.1.1
          - ip: 8.8.8.8
        searchDomains:
          - dev
        controlPlane:
          - name: control-plane-1
            nodeIP: 10.0.1.11
            subnetMask: 255.255.255.0
          - name: control-plane-2
            nodeIP: 10.0.1.12
            subnetMask: 255.255.255.0
          - name: control-plane-3
            nodeIP: 10.0.1.13
            subnetMask: 255.255.255.0
          
        nodeGroups:
          bootstrap: # bootstrap/system nodes
            resources:
              cpu: 2
              mem: 4096
            labels:
              k8s.xcp-ng.org/node-group: system
            nodes:
            - name: worker-node-1
              nodeIP: '10.0.1.101/24'
            - name: worker-node-2
              nodeIP: '10.0.1.102/24'
            - name: worker-node-3
              nodeIP: '10.0.1.103/24'
      
          app_cluster:
            # e.g., a clustered app with well-aligned, isolated resources
            minInstances: 3
            maxInstances: 15
            resources:
              cpu: 4
              mem: 8192
            network:
              name: my-app-network
              addressRange: 10.0.100.32/28
            nodeTemplate:
              labels:
                k8s.xcp-ng.org/node-group: my-clustered-app
      
          memoryBound:
            # (e.g.) VMs configured to scale with higher memory ratios
            resources:
              cpu: 2
              mem: 16384
            labels:
              k8s.xcp-ng.org/node-group: memory
      
          computeBound:
            # (e.g.) VMs configured to scale with higher processor limits
            resources:
              cpu: 4
              mem: 8192
            labels:
              k8s.xcp-ng.org/node-group: compute
      
      • rfx77

        Hi!

        You can use the NFS provisioner: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

        It gives you persistent volumes as subdirectories on an NFS share. This is a good solution if you have highly available NFS storage.
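
        As a minimal sketch of what consuming the provisioner looks like: its Helm chart creates a StorageClass (named "nfs-client" by default), and a PVC just references that class. The claim name and size here are placeholders:

        ```yaml
        # Hedged example: assumes nfs-subdir-external-provisioner was installed
        # via its Helm chart with the default StorageClass name "nfs-client".
        # The provisioner creates one subdirectory on the NFS export per
        # PersistentVolume it provisions.
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: www-data            # hypothetical claim name
        spec:
          accessModes:
            - ReadWriteMany         # NFS allows shared read-write mounts
          storageClassName: nfs-client
          resources:
            requests:
              storage: 10Gi
        ```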

        k8s really can be very problematic, so I would recommend using default solutions as much as possible and as few vendor solutions (like CSI drivers) as possible. We had a really bad time with the VMware CSI driver, for example.

        If you would like a capable k8s distribution, I would recommend RKE2 (from Rancher). It is free and easy to install.

        As a load balancer we used MetalLB, which is quite standard and very easy to configure.
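
        For reference, MetalLB in Layer 2 mode only needs an address pool carved out of the node network plus an advertisement for it. This is a sketch; the address range is an assumption and must be a free range on your network:

        ```yaml
        # Hedged example: MetalLB L2 configuration (metallb.io/v1beta1 CRDs).
        # The pool range below is a placeholder.
        apiVersion: metallb.io/v1beta1
        kind: IPAddressPool
        metadata:
          name: default-pool
          namespace: metallb-system
        spec:
          addresses:
            - 10.0.1.240-10.0.1.250
        ---
        apiVersion: metallb.io/v1beta1
        kind: L2Advertisement
        metadata:
          name: default-l2
          namespace: metallb-system
        spec:
          ipAddressPools:
            - default-pool
        ```

        Any Service of type LoadBalancer then gets an IP from this pool automatically.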

        If you are running the ingress controller as a NodePort service, you must have an external load balancer in front of it.

        • thepro101 @rfx77

          @rfx77 Great information!

          Considering we have a fully HA TrueNAS SAN, we are most likely going to use https://github.com/democratic-csi/democratic-csi/blob/master/examples/freenas-nfs.yaml

          If we go with a k8s distribution like RKE2, is there XCP-ng guest tools support, and if so, when during the process would we install the tools on the nodes?

          • rfx77 @thepro101

            @thepro101

            I would install the guest tools right from the start, after the base install of the OS.

            Create at least 3 nodes, install them as you like (with guest tools), then install the first RKE2 node. Read the docs on how to configure MetalLB on RKE2. You have to make a small config tweak in the nginx ingress controller config.
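
            On RKE2, the bundled ingress-nginx is customized through a HelmChartConfig manifest. A sketch of the kind of tweak meant here, switching the controller's Service to type LoadBalancer so MetalLB can assign it an address (the exact values you need may differ):

            ```yaml
            # Hedged sketch: place in /var/lib/rancher/rke2/server/manifests/
            # on a server node; RKE2 merges this into the bundled chart's values.
            apiVersion: helm.cattle.io/v1
            kind: HelmChartConfig
            metadata:
              name: rke2-ingress-nginx
              namespace: kube-system
            spec:
              valuesContent: |-
                controller:
                  service:
                    enabled: true
                    type: LoadBalancer   # MetalLB assigns an IP from its pool
            ```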

            Then add the other nodes.

            To manage RKE2 I would use OpenLens.

            You can use democratic-csi, but be aware that you are completely trusting TrueNAS and this open-source project with your data. I don't think I would go this route without being experienced with k8s.

            I would use the NFS provisioner, and once everything works fine and you have a solid CSI-enabled backup, you can add democratic-csi into the mix.

            For us, backup and restore was the biggest problem. In theory everything seems easy with K10 or Velero, but if you completely shoot your cluster you will have a very hard time.

            To be honest, after six months and some installations we gave up on k8s and migrated our customers and our internal IT to a setup where we use openSUSE MicroOS VMs for every docker-compose project. We now have approximately the same number of VMs as we had namespaces, but with the benefit of complete control over resources with very little overhead. And we get the benefit of optimal backup and restore.

            K8s has bitten me quite a bit too often 😉
