XCP-ng

    ralphsmeets (@ralphsmeets)

    Latest posts made by ralphsmeets

    • RE: Kubernetes Recipe - Basic setup not working

      It seems that Debian Buster has some problems with Kubernetes. While this base setup works, you should also make sure that every tool uses the legacy iptables. If not, pods will not be able to reach the Kubernetes API... and then: failure all over!
      So we also need:

      ```
      update-alternatives --set iptables /usr/sbin/iptables-legacy
      update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
      update-alternatives --set arptables /usr/sbin/arptables-legacy
      update-alternatives --set ebtables /usr/sbin/ebtables-legacy
      ```
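      The four commands above can also be collapsed into one loop. A sketch, assuming a Debian Buster box where the `*-legacy` binaries ship under `/usr/sbin` (it needs root, and it only touches tools whose legacy binary actually exists):

      ```shell
      # Sketch: switch all four netfilter tools to their legacy back-ends.
      # Assumes Debian Buster (*-legacy binaries in /usr/sbin); skips any
      # tool whose legacy binary is missing. Run as root.
      for tool in iptables ip6tables arptables ebtables; do
          legacy="/usr/sbin/${tool}-legacy"
          if [ -x "$legacy" ]; then
              update-alternatives --set "$tool" "$legacy"
          fi
      done
      ```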
      posted in Xen Orchestra
    • RE: Kubernetes Recipe - Basic setup not working

      I got it working. Seems like the podCIDR wasn't set. Setting it manually by patching the nodes worked for me:

      ```
      for node in master node-1 node-2 node-3; do
        kubectl patch node $node -p '{"spec":{"podCIDR":"10.96.0.0/12"}}'
      done
      ```

      Not sure if this is a problem with the recipe or a bug in kube-router/kube-controller-manager. Anyway, I have my cluster up and running now!

      posted in Xen Orchestra
    • RE: Kubernetes Recipe - Basic setup not working

      I'm going to have a look into it!
      Thanks, and hopefully I'll find the culprit, so the recipe can be updated with some nice new ingredients 😉

      posted in Xen Orchestra
    • RE: Kubernetes Recipe - Basic setup not working

      @BenjiReis said in Kubernetes Recipe - Basic setup not working:

      /proc/sys/net/bridge/bridge-nf-call-iptables

      Just checked:

      ```
      debian@master:~$ more /proc/sys/net/bridge/bridge-nf-call-iptables
      1
      ```
      posted in Xen Orchestra
    • RE: Kubernetes Recipe - Basic setup not working

      @BenjiReis
      No, the manual change didn't solve the problem 😞

      posted in Xen Orchestra
    • RE: Kubernetes Recipe - Basic setup not working

      I have exactly the same problem. I tried different CIDRs, changing them manually in the kube-controller-manager on the master and then restarting the kubelet service.

      ```
      kubectl get pods -A -o wide
      NAMESPACE     NAME                                   READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
      default       kubernetes-bootcamp-57978f5f5d-mx8sd   0/1     Pending             0          6m52s   <none>         <none>   <none>           <none>
      kube-system   coredns-f9fd979d6-8jjbs                0/1     ContainerCreating   0          83m     <none>         node-1   <none>           <none>
      kube-system   coredns-f9fd979d6-mn4d8                0/1     ContainerCreating   0          83m     <none>         node-1   <none>           <none>
      kube-system   etcd-master                            1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
      kube-system   kube-apiserver-master                  1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
      kube-system   kube-controller-manager-master         1/1     Running             0          35m     192.168.1.52   master   <none>           <none>
      kube-system   kube-proxy-84k8x                       1/1     Running             1          79m     192.168.1.55   node-2   <none>           <none>
      kube-system   kube-proxy-f5shp                       1/1     Running             1          79m     192.168.1.53   node-1   <none>           <none>
      kube-system   kube-proxy-qg4bk                       1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
      kube-system   kube-proxy-whcwv                       1/1     Running             1          79m     192.168.1.54   node-3   <none>           <none>
      kube-system   kube-router-7zmtb                      0/1     CrashLoopBackOff    22         79m     192.168.1.55   node-2   <none>           <none>
      kube-system   kube-router-llgqk                      0/1     CrashLoopBackOff    23         79m     192.168.1.53   node-1   <none>           <none>
      kube-system   kube-router-q4m5d                      0/1     CrashLoopBackOff    22         79m     192.168.1.54   node-3   <none>           <none>
      kube-system   kube-router-xs696                      0/1     CrashLoopBackOff    33         83m     192.168.1.52   master   <none>           <none>
      kube-system   kube-scheduler-master                  1/1     Running             1          83m     192.168.1.52   master   <none>           <none>

      debian@master:~$ kubectl -n kube-system logs -f kube-router-7zmtb
      I1015 09:06:02.910389       1 kube-router.go:231] Running /usr/local/bin/kube-router version v1.1.0-dirty, built on 2020-10-02T22:14:14+0000, go1.13.13
      F1015 09:06:03.015240       1 network_routes_controller.go:1060] Failed to get pod CIDR from node spec. kube-router relies on kube-controller-manager to allocate pod CIDR for the node or an annotation `kube-router.io/pod-cidr`. Error: node.Spec.PodCIDR not set for node: node-2

      debian@master:~$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' -A

      debian@master:~$ kubectl cluster-info dump -o yaml | grep -i cidr | grep \\\-\\\-
            - --allocate-node-cidrs=true
            - --cluster-cidr=10.96.0.0/22
            - --node-cidr-mask-size=25

      debian@master:~$ kubectl logs pod/kube-controller-manager-master -n kube-system
      E1015 08:35:32.193635       1 controller_utils.go:248] Error while processing Node Add: failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range
      ```
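      For what it's worth, the two flags in that dump bound how many nodes can ever receive a pod CIDR: a /22 cluster range carved into /25 node ranges yields at most 2^(25-22) = 8 allocations. A quick arithmetic check (illustrative only; it doesn't explain why the allocator reported none left):

      ```shell
      # The controller-manager can hand out at most
      # 2^(node-cidr-mask-size - cluster-cidr prefix) per-node pod CIDRs.
      cluster_prefix=22   # from --cluster-cidr=10.96.0.0/22
      node_mask=25        # from --node-cidr-mask-size=25
      echo "max node CIDRs: $(( 1 << (node_mask - cluster_prefix) ))"   # prints: max node CIDRs: 8
      ```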
      
      
      posted in Xen Orchestra