XCP-ng
    Kubernetes cluster recipes not seeing nodes

    Xen Orchestra
    43 Posts 3 Posters 27.0k Views 3 Watching
    fred974 @GabrielG

      @GabrielG the installation took a very long time, so I left it. When I came back, only the master was up and running; the nodes were down, so I powered them up manually.

      fred974 @fred974

        @GabrielG Do you have any suggestions on how to fix the cluster?

        GabrielG @fred974

          It's hard to say without knowing what went wrong during the installation.

          First, I would check whether /home/debian/.kube/config has the same content as /etc/kubernetes/admin.conf, and whether debian is correctly assigned as the owner of the file.
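
          That check can be scripted. A minimal sketch (the paths are arguments so it can be dry-run anywhere; on the master the real arguments would be /etc/kubernetes/admin.conf and /home/debian/.kube/config, and reading admin.conf needs sudo):

```shell
# Compare a user kubeconfig against admin.conf and report who owns it.
check_kubeconfig() {
    admin="$1"
    usercfg="$2"
    if cmp -s "$admin" "$usercfg"; then
        echo "contents identical"
    else
        echo "contents differ"
    fi
    # GNU stat: print owner and group of the user-side config
    stat -c 'owner: %U:%G' "$usercfg"
}
```

          If the ownership turns out wrong, the standard kubeadm fix is sudo chown $(id -u):$(id -g) $HOME/.kube/config, run as the user who will call kubectl.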

          fred974 @GabrielG

            @GabrielG the file contents are identical, but the ownership differs: admin.conf is owned by root, not 'debian'. Should it be debian?

            debian@master:~/.kube$ pwd
            /home/debian/.kube
            
            debian@master:~/.kube$ ls -la
            total 20
            drwxr-xr-x 3 root   root   4096 Mar 21 13:36 .
            drwxr-xr-x 4 debian debian 4096 Mar 21 13:36 ..
            drwxr-x--- 4 root   root   4096 Mar 21 13:36 cache
            -rw------- 1 debian debian 5638 Mar 21 13:36 config
            
            debian@master:/etc/kubernetes$ pwd
            /etc/kubernetes
            debian@master:/etc/kubernetes$ ls -la
            total 44
            drwxr-xr-x  4 root root 4096 Mar 21 13:36 .
            drwxr-xr-x 77 root root 4096 Mar 27 04:07 ..
            -rw-------  1 root root 5638 Mar 21 13:36 admin.conf
            -rw-------  1 root root 5674 Mar 21 13:36 controller-manager.conf
            -rw-------  1 root root 1962 Mar 21 13:36 kubelet.conf
            drwxr-xr-x  2 root root 4096 Mar 21 13:36 manifests
            drwxr-xr-x  3 root root 4096 Mar 21 13:36 pki
            -rw-------  1 root root 5622 Mar 21 13:36 scheduler.conf
            
            GabrielG @fred974

              @fred974 said in Kubernetes cluster recipes not seeing nodes:

              Should it be debian?

              No, only /home/debian/.kube/config is meant to be owned by the debian user.

              Are you using kubectl as the debian user or as root?
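
              (The user matters because kubectl does not read /etc/kubernetes/admin.conf directly: it takes $KUBECONFIG if set, otherwise the current user's $HOME/.kube/config, so root and debian look at different files. A sketch of that lookup:)

```shell
# Replicates kubectl's default kubeconfig resolution: $KUBECONFIG wins,
# otherwise ~/.kube/config of the *current* user -- /root/.kube/config
# for root, /home/debian/.kube/config for debian.
# (KUBECONFIG can also be a colon-separated list; ignored in this sketch.)
kubeconfig_path() {
    if [ -n "$KUBECONFIG" ]; then
        echo "$KUBECONFIG"
    else
        echo "$HOME/.kube/config"
    fi
}
```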

              fred974 @GabrielG

                @GabrielG said in Kubernetes cluster recipes not seeing nodes:

                Are you using kubectl with debian user or with the root user?

                I was using the root account 😞 I tried with the debian user and now I get something:

                debian@master:~$ kubectl get nodes
                NAME     STATUS   ROLES           AGE     VERSION
                master   Ready    control-plane   5d23h   v1.26.3
                node-2   Ready    <none>          5d23h   v1.26.3
                

                I created a cluster with 1x master and 3x nodes. Should the output of the command above only show 2 nodes?

                GabrielG @fred974

                  Yes, you should have something like this:

                  debian@master:~$ kubectl get nodes
                  NAME     STATUS   ROLES           AGE     VERSION
                  master   Ready    control-plane   6m52s   v1.26.3
                  node-1   Ready    <none>          115s    v1.26.3
                  node-2   Ready    <none>          2m47s   v1.26.3
                  node-3   Ready    <none>          2m36s   v1.26.3
                  

                  Are all the worker node VMs started? What's the output of kubectl get events?
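
                  (With a 1x master + 3x node cluster, four Ready entries are expected. A quick sketch that counts them from the plain kubectl get nodes output:)

```shell
# Counts lines whose STATUS column is Ready, skipping the header row.
# On the master: kubectl get nodes | count_ready_nodes   (expect 4 here)
count_ready_nodes() {
    awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}
```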

                  fred974 @GabrielG

                    @GabrielG Sorry for the late reply. Here is what I have.

                    debian@master:~$ kubectl get nodes
                    NAME     STATUS   ROLES           AGE     VERSION
                    master   Ready    control-plane   7d22h   v1.26.3
                    node-2   Ready    <none>          7d22h   v1.26.3
                    

                    and

                    debian@master:~$ kubectl get events
                    No resources found in default namespace.
                    
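
                    (Note that kubectl get events only queries the current namespace, so an empty result in default is expected; the interesting components here live in kube-system and kube-flannel. Querying everything needs the --all-namespaces flag; a sketch of the command:)

```shell
# Cluster-wide events query; --sort-by puts the newest entries last,
# so recent failures are easy to spot.
events_cmd() {
    echo "kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp"
}
```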
                    GabrielG @fred974

                      Thank you.

                      Are all VMs started?

                      What's the output of kubectl get pods --all-namespaces?

                    fred974 @GabrielG

                        @GabrielG said in Kubernetes cluster recipes not seeing nodes:

                        Are all VMs started?

                        Yes, all the VMs are up and running:
                        (screenshot: the running VMs in Xen Orchestra)

                        @GabrielG said in Kubernetes cluster recipes not seeing nodes:

                        What's the output of kubectl get pods --all-namespaces?

                        debian@master:~$ kubectl get pods --all-namespaces
                        NAMESPACE      NAME                             READY   STATUS    RESTARTS        AGE
                        kube-flannel   kube-flannel-ds-mj4n6            1/1     Running   2 (3d ago)      8d
                        kube-flannel   kube-flannel-ds-vtd2k            1/1     Running   2 (6d19h ago)   8d
                        kube-system    coredns-787d4945fb-85867         1/1     Running   2 (6d19h ago)   8d
                        kube-system    coredns-787d4945fb-dn96g         1/1     Running   2 (6d19h ago)   8d
                        kube-system    etcd-master                      1/1     Running   2 (6d19h ago)   8d
                        kube-system    kube-apiserver-master            1/1     Running   2 (6d19h ago)   8d
                        kube-system    kube-controller-manager-master   1/1     Running   2 (6d19h ago)   8d
                        kube-system    kube-proxy-fmjnv                 1/1     Running   2 (6d19h ago)   8d
                        kube-system    kube-proxy-gxsrs                 1/1     Running   2 (3d ago)      8d
                        kube-system    kube-scheduler-master            1/1     Running   2 (6d19h ago)   8d
                        

                        Thank you very much

                        fred974 @fred974

                          @GabrielG Do you think I should delete all the VMs and rerun the deploy recipe? Also, is it normal that I no longer have the option to set a network CIDR like before?

                          GabrielG @fred974

                            You can do that, but it won't help us understand what went wrong during the installation of worker nodes 1 and 3.
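
                            (One middle ground before deleting everything, assuming SSH access to the worker VMs: if workers 1 and 3 simply never joined the cluster, the master can mint a fresh join command and it can be replayed on each missing worker by hand:)

```shell
# On the master: print a join command with a fresh bootstrap token.
# It outputs something of the shape
#   kubeadm join <master-ip>:6443 --token <token> \
#       --discovery-token-ca-cert-hash sha256:<hash>
# which is then run with sudo on each worker that is missing.
join_cmd() {
    sudo kubeadm token create --print-join-command
}
```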

                            Can you show me the output of sudo cat /var/log/messages for each node (master and workers)?

                            Concerning the CIDR: we now use flannel as the Container Network Interface, which allocates a default CIDR (10.244.0.0/16) to the pod network.
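
                            (For context: stock flannel manifests expect that pod CIDR, and kubeadm has to be told about it at init time via --pod-network-cidr, which is presumably what the recipe now hard-codes instead of asking. The flag is real kubeadm; the recipe internals are an assumption. A sketch of the bootstrap invocation:)

```shell
# Pod CIDR that the stock flannel manifests expect.
# The kubeadm bootstrap therefore looks roughly like the command printed here.
init_cmd() {
    flannel_cidr="10.244.0.0/16"
    echo "kubeadm init --pod-network-cidr=${flannel_cidr}"
}
```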

                            fred974 @GabrielG

                              @GabrielG said in Kubernetes cluster recipes not seeing nodes:

                              Can you show me what's the output of sudo cat /var/log/messages for each nodes (master and workers)?

                              From the master:

                              debian@master:~$ sudo cat /var/log/messages
                              Mar 26 00:10:18 master rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="572" x-info="https://www.rsyslog.com"] rsyslogd was HUPed
                              

                              From node1:
                              https://pastebin.com/xrqPd88V

                              From node2:
                              https://pastebin.com/aJch3diH

                              From node3:
                              https://pastebin.com/Zc1y42NA
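
                              (Rather than eyeballing the full cat output, the logs can be filtered for the interesting lines. A sketch, with the log path as an argument; on these VMs it would be /var/log/messages, read with sudo:)

```shell
# Surfaces kubelet / kubeadm / cloud-init lines and obvious failures,
# keeping only the 50 most recent matches.
scan_syslog() {
    grep -Ei 'kubelet|kubeadm|cloud-init|error|fail' "$1" | tail -n 50
}
```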

                              GabrielG @fred974

                                Thank you, I'll take a look tomorrow.

                                Is it the whole output for the master?

                                fred974 @GabrielG

                                  @GabrielG yes, all of it

                                  fred974 @fred974

                                    @GabrielG did you get a chance to look at the logs I provided? Any clues?

                                    GabrielG @fred974

                                      Hi,

                                      Nothing useful. Maybe you can try to delete the VMs and redeploy the cluster.

                                      fred974 @GabrielG

                                        @GabrielG said in Kubernetes cluster recipes not seeing nodes:

                                        Nothing useful. Maybe you can try to delete the VMs and redeploy the cluster.

                                        Ok, I will do that. While I redeploy the cluster, what am I looking for? Which logs should I monitor?

                                        GabrielG @fred974

                                          I'd say any error in the console during the cloud-init installation.
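
                                          (cloud-init also keeps its own logs besides the console, so on a half-deployed VM they are worth checking; these are the standard cloud-init paths, independent of the recipe. sudo cloud-init status --long gives the overall result of the run.)

```shell
# Standard cloud-init log locations: cloud-init.log traces what the
# modules did, cloud-init-output.log captures stdout/stderr of the
# scripts it ran -- tail both with sudo on the affected VM.
cloud_init_logs() {
    printf '%s\n' /var/log/cloud-init.log /var/log/cloud-init-output.log
}
```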

                                            fred974 @fred974

                                             @GabrielG I deleted the VMs and redeployed with 3 nodes.
                                             So far only the master VM has been created and nothing else; the 3x nodes are missing.
                                             When I look at the console of the master VM, all I get is this:

                                             (screenshot: master VM console)

                                             So the master VM is created but nothing has been deployed.

                                             There are no errors on the Xen Orchestra screen or in its logs.

