XCP-ng

    Kubernetes Recipe - Basic setup not working

    Xen Orchestra Β· 16 Posts Β· 4 Posters Β· 2.5k Views
    • olivierlambert (Vates πŸͺ Co-Founder CEO)

      Ping @BenjiReis

      • BenjiReis (Vates πŸͺ XCP-ng Team)

        Hello,

        Can the master communicate with the other nodes? What IP ranges did you use for the Kubernetes CIDR and for your VM IPs?
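
        (For reference, a quick way to check both from the master; the worker IPs below are placeholders, not values from this thread:)

        # Check basic reachability from the master to each worker (example IPs):
        for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do ping -c 1 "$ip"; done

        # Confirm the pod CIDR the controller-manager was configured with:
        kubectl cluster-info dump | grep -- --cluster-cidr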

        Thanks

        • suaro

          @BenjiReis, yes, all of the nodes could communicate with each other. The CIDR supplied in the template was in a completely different range that didn't overlap any existing ones. This morning I had to move forward, so I manually deployed a 10-node cluster (what a pain!) and had to abandon the template for now. I do plan to try this again, as I suspect I was doing something wrong, and it would make my life easier if I could get it working. One thing I noticed this morning is that the master was out of disk space; only 4 GB was allocated, which seems pretty low. I'll try again sometime.

          • BenjiReis (Vates πŸͺ XCP-ng Team)

            Ok, I'll try on my end to see if I encounter a similar error. Let me know how your next try goes.

            Regards

            • ralphsmeets

              I have exactly the same problem. I tried different CIDRs, changing the value manually in the kube-controller-manager and then restarting the kubelet service.
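
              (On a kubeadm-based cluster, that manual change usually means editing the controller-manager's static pod manifest; a sketch, assuming the standard kubeadm paths:)

              # Edit the --cluster-cidr flag in the static pod manifest (default kubeadm location):
              sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
              # The kubelet watches this directory and recreates the pod; restarting it forces the reload:
              sudo systemctl restart kubelet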

              kubectl get pods -A -o wide
              NAMESPACE     NAME                                   READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
              default       kubernetes-bootcamp-57978f5f5d-mx8sd   0/1     Pending             0          6m52s   <none>         <none>   <none>           <none>
              kube-system   coredns-f9fd979d6-8jjbs                0/1     ContainerCreating   0          83m     <none>         node-1   <none>           <none>
              kube-system   coredns-f9fd979d6-mn4d8                0/1     ContainerCreating   0          83m     <none>         node-1   <none>           <none>
              kube-system   etcd-master                            1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
              kube-system   kube-apiserver-master                  1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
              kube-system   kube-controller-manager-master         1/1     Running             0          35m     192.168.1.52   master   <none>           <none>
              kube-system   kube-proxy-84k8x                       1/1     Running             1          79m     192.168.1.55   node-2   <none>           <none>
              kube-system   kube-proxy-f5shp                       1/1     Running             1          79m     192.168.1.53   node-1   <none>           <none>
              kube-system   kube-proxy-qg4bk                       1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
              kube-system   kube-proxy-whcwv                       1/1     Running             1          79m     192.168.1.54   node-3   <none>           <none>
              kube-system   kube-router-7zmtb                      0/1     CrashLoopBackOff    22         79m     192.168.1.55   node-2   <none>           <none>
              kube-system   kube-router-llgqk                      0/1     CrashLoopBackOff    23         79m     192.168.1.53   node-1   <none>           <none>
              kube-system   kube-router-q4m5d                      0/1     CrashLoopBackOff    22         79m     192.168.1.54   node-3   <none>           <none>
              kube-system   kube-router-xs696                      0/1     CrashLoopBackOff    33         83m     192.168.1.52   master   <none>           <none>
              kube-system   kube-scheduler-master                  1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
              
              debian@master:~$ kubectl -n kube-system logs -f kube-router-7zmtb
              I1015 09:06:02.910389       1 kube-router.go:231] Running /usr/local/bin/kube-router version v1.1.0-dirty, built on 2020-10-02T22:14:14+0000, go1.13.13
              F1015 09:06:03.015240       1 network_routes_controller.go:1060] Failed to get pod CIDR from node spec. kube-router relies on kube-controller-manager to allocate pod CIDR for the node or an annotation `kube-router.io/pod-cidr`. Error: node.Spec.PodCIDR not set for node: node-2
              
              debian@master:~$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' -A
              
              debian@master:~$ kubectl cluster-info dump -o yaml | grep -i cidr | grep \\\-\\\-
                    - --allocate-node-cidrs=true
                    - --cluster-cidr=10.96.0.0/22
                    - --node-cidr-mask-size=25
              
              debian@master:~$ kubectl logs pod/kube-controller-manager-master -n kube-system
              E1015 08:35:32.193635       1 controller_utils.go:248] Error while processing Node Add: failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range
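
              For what it's worth, the arithmetic above shouldn't be the limit: splitting the 10.96.0.0/22 cluster CIDR into /25 per-node blocks gives 2^(25-22) = 8 allocatable ranges, which is plenty for 4 nodes, so the "no remaining CIDRs" error looks more like allocator state than genuine exhaustion. A quick sanity check on that math (assuming python3 is installed):

              # List the /25 node blocks that fit inside the configured 10.96.0.0/22 (prints 8 subnets):
              python3 -c 'import ipaddress; print(*ipaddress.ip_network("10.96.0.0/22").subnets(new_prefix=25), sep="\n")'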
              
              
              • BenjiReis (Vates πŸͺ XCP-ng Team)

                Did the manual change solve the issue?

                • ralphsmeets

                  @BenjiReis
                  No, the manual change didn't solve the problem 😞

                  • BenjiReis (Vates πŸͺ XCP-ng Team)

                    Did you make sure /proc/sys/net/bridge/bridge-nf-call-iptables is set to 1?

                    Our implementation uses kube-router, which requires this setting.
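
                    (If it ever isn't, the usual way to set it, assuming the br_netfilter module is available, is:)

                    # Load the bridge netfilter module and enable iptables processing of bridged traffic:
                    sudo modprobe br_netfilter
                    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
                    # Persist across reboots (the file name is just an example):
                    echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-k8s.conf
                    sudo sysctl --system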

                    • ralphsmeets

                      @BenjiReis said in Kubernetes Recipe - Basic setup not working:

                      /proc/sys/net/bridge/bridge-nf-call-iptables

                      Just checked:

                      debian@master:~$ more /proc/sys/net/bridge/bridge-nf-call-iptables
                      1
                      
                      • BenjiReis (Vates πŸͺ XCP-ng Team)

                        OK, thanks. Then unfortunately I don't understand why there's a problem...

                        The recipe follows this doc to create the cluster: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
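
                        (For context, that doc's flow boils down to roughly the following; the CIDR is the value from the cluster dump above, and the angle-bracket placeholders are exactly that:)

                        # On the master:
                        sudo kubeadm init --pod-network-cidr=10.96.0.0/22
                        # On each worker, using the join command printed by kubeadm init:
                        sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>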

                        @suaro how does your manual install differ from the doc used by the recipe? Perhaps we can dig there.

                        • BenjiReis (Vates πŸͺ XCP-ng Team)

                          And the pod network is made with kube-router: https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md
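
                          (Per that doc, deploying kube-router on a kubeadm cluster should be a single manifest apply, something like:)

                          kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml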

                          • ralphsmeets

                            I'm going to have a look into it!
                            Thanks, and hopefully I'll find the culprit so the recipe can be updated with some nice new ingredients πŸ˜‰

                            • BenjiReis (Vates πŸͺ XCP-ng Team)

                              Thanks

                              I'll try to investigate myself as well when I can; do not hesitate to come back here if you find anything. πŸ™‚

                              • ralphsmeets

                                I got it working. Seems like the podCIDR wasn't set. Setting it manually by patching the nodes worked for me:

                                for node in master node-1 node-2 node-3; do
                                  kubectl patch node $node -p '{"spec":{"podCIDR":"10.96.0.0/12"}}'
                                done
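
                                (A quick way to confirm the patch landed on every node; blank values mean the podCIDR still isn't set:)

                                kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'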
                                

                                Not sure if this is a problem with the recipe or a bug in kube-router/kube-controller-manager. Anyway, I have my cluster up and running now!

                                • ralphsmeets

                                  It seems that Debian Buster has some problems with Kubernetes. While this base setup is working, one should also make sure that every tool uses legacy iptables. If not, pods will not be able to reach the Kubernetes API... and then: failure all over!
                                  So we also need:

                                  update-alternatives --set iptables /usr/sbin/iptables-legacy
                                  update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
                                  update-alternatives --set arptables /usr/sbin/arptables-legacy
                                  update-alternatives --set ebtables /usr/sbin/ebtables-legacy
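
                                  A quick way to confirm the switch took effect:

                                  update-alternatives --display iptables
                                  iptables --version    # should now report "(legacy)" rather than "(nf_tables)"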
                                  