I have exactly the same problem. I tried different CIDRs, changing them manually in the kube-controller-manager manifest and then restarting the kubelet service.
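In case it matters, the flags in question live in the controller-manager static pod manifest (on a kubeadm-style setup that is /etc/kubernetes/manifests/kube-controller-manager.yaml), and the lines I was editing look roughly like this, the CIDR values being my own picks rather than anything authoritative:

- --allocate-node-cidrs=true
- --cluster-cidr=10.96.0.0/22
- --node-cidr-mask-size=25

After saving the manifest I ran sudo systemctl restart kubelet so the static pod gets recreated with the new flags.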
kubectl get pods -A -o wide
NAMESPACE     NAME                                   READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
default       kubernetes-bootcamp-57978f5f5d-mx8sd   0/1     Pending             0          6m52s   <none>         <none>   <none>           <none>
kube-system   coredns-f9fd979d6-8jjbs                0/1     ContainerCreating   0          83m     <none>         node-1   <none>           <none>
kube-system   coredns-f9fd979d6-mn4d8                0/1     ContainerCreating   0          83m     <none>         node-1   <none>           <none>
kube-system   etcd-master                            1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
kube-system   kube-apiserver-master                  1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
kube-system   kube-controller-manager-master         1/1     Running             0          35m     192.168.1.52   master   <none>           <none>
kube-system   kube-proxy-84k8x                       1/1     Running             1          79m     192.168.1.55   node-2   <none>           <none>
kube-system   kube-proxy-f5shp                       1/1     Running             1          79m     192.168.1.53   node-1   <none>           <none>
kube-system   kube-proxy-qg4bk                       1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
kube-system   kube-proxy-whcwv                       1/1     Running             1          79m     192.168.1.54   node-3   <none>           <none>
kube-system   kube-router-7zmtb                      0/1     CrashLoopBackOff    22         79m     192.168.1.55   node-2   <none>           <none>
kube-system   kube-router-llgqk                      0/1     CrashLoopBackOff    23         79m     192.168.1.53   node-1   <none>           <none>
kube-system   kube-router-q4m5d                      0/1     CrashLoopBackOff    22         79m     192.168.1.54   node-3   <none>           <none>
kube-system   kube-router-xs696                      0/1     CrashLoopBackOff    33         83m     192.168.1.52   master   <none>           <none>
kube-system   kube-scheduler-master                  1/1     Running             1          83m     192.168.1.52   master   <none>           <none>
debian@master:~$ kubectl -n kube-system logs -f kube-router-7zmtb
I1015 09:06:02.910389 1 kube-router.go:231] Running /usr/local/bin/kube-router version v1.1.0-dirty, built on 2020-10-02T22:14:14+0000, go1.13.13
F1015 09:06:03.015240 1 network_routes_controller.go:1060] Failed to get pod CIDR from node spec. kube-router relies on kube-controller-manager to allocate pod CIDR for the node or an annotation `kube-router.io/pod-cidr`. Error: node.Spec.PodCIDR not set for node: node-2
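As a stopgap, the annotation mentioned in that error can be set by hand; I assume something like the following would get kube-router past the check for node-2 (the /25 is just an example slice out of the cluster CIDR, not a value from my cluster):

kubectl annotate node node-2 kube-router.io/pod-cidr=10.96.0.0/25

But that only works around the symptom; the controller-manager still never allocates a podCIDR on its own.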
debian@master:~$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' -A
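That jsonpath comes back empty. A slightly more readable per-node check, for anyone reproducing this, would be something like:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

which should just print <none> in the PODCIDR column for every node here.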
debian@master:~$ kubectl cluster-info dump -o yaml | grep -i cidr | grep '\-\-'
- --allocate-node-cidrs=true
- --cluster-cidr=10.96.0.0/22
- --node-cidr-mask-size=25
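Unless I'm misreading how the allocator works, a /22 cluster CIDR with --node-cidr-mask-size=25 should yield 2^(25-22) = 8 node subnets, which is plenty for 4 nodes. A quick sanity check that lists them (nothing cluster-specific, just arithmetic):

python3 -c "import ipaddress; [print(n) for n in ipaddress.ip_network('10.96.0.0/22').subnets(new_prefix=25)]"

So the "no remaining CIDRs left to allocate" error below is what I don't understand.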
debian@master:~$ kubectl logs pod/kube-controller-manager-master -n kube-system
E1015 08:35:32.193635 1 controller_utils.go:248] Error while processing Node Add: failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range