Kubelet master stays in KubeletNotReady because of cni missing

Kubelet was initialized with the Calico pod network:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --image-repository=someserver

Then I fetched calico.yaml v3.11 and applied it:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml

After that I checked the node status:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" get nodes
NAME       STATUS     ROLES    AGE     VERSION
master-1   NotReady   master   7m21s   v1.17.2

kubectl describe on the node says the cni config is uninitialized, but I thought Calico was supposed to take care of that?

  MemoryPressure   False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

In fact, there is nothing at all under /etc/cni/net.d/, so it seems something got skipped?

ll /etc/cni/net.d/
total 0
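
For comparison, once the calico-node pod's install-cni init container has run successfully, this directory gets populated; on a healthy Calico v3.11 node you would expect to see Calico's default files (shown here as a reference layout, not actual output):

ll /etc/cni/net.d/
# expected once install-cni has run:
# 10-calico.conflist   <- the CNI network config kubelet is waiting for
# calico-kubeconfig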
The kube-system pods confirm that Calico never came up:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" -n kube-system get pods
NAME                                       READY   STATUS                  RESTARTS   AGE
calico-kube-controllers-5644fb7cf6-f7lqq   0/1     Pending                 0          3h
calico-node-f4xzh                          0/1     Init:ImagePullBackOff   0          3h
coredns-7fb8cdf968-bbqbz                   0/1     Pending                 0          3h24m
coredns-7fb8cdf968-vdnzx                   0/1     Pending                 0          3h24m
etcd-master-1                              1/1     Running                 0          3h24m
kube-apiserver-master-1                    1/1     Running                 0          3h24m
kube-controller-manager-master-1           1/1     Running                 0          3h24m
kube-proxy-9m879                           1/1     Running                 0          3h24m
kube-scheduler-master-1                    1/1     Running                 0          3h24m

As mentioned before, I am running everything through a local repo, and journalctl says:

 kubelet[21935]: E0225 14:30:54.830683   21935 pod_workers.go:191] Error syncing pod cec2f72b-844a-4d6b-8606-3aff06d4a36d ("calico-node-f4xzh_kube-system(cec2f72b-844a-4d6b-8606-3aff06d4a36d)"), skipping: failed to "StartContainer" for "upgrade-ipam" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://repo:10000/v2/calico/cni/manifests/v3.11.2: no basic auth credentials"
 kubelet[21935]: E0225 14:30:56.008989   21935 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
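
The "no basic auth credentials" part means the Docker daemon on the node has no login for the repo:10000 registry, so the failure can be reproduced outside of kubelet with a manual pull (image reference copied from the log above):

sudo docker pull repo:10000/calico/cni:v3.11.2
# fails the same way with "no basic auth credentials"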

It feels like this is more than just a CNI problem.

Until the calico pods are running successfully and CNI is set up correctly, the coredns pods will stay in Pending and the master will stay NotReady.
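
An easy way to watch for that transition is to follow the kube-system pods until calico-node and the coredns pods go Ready (same kubeconfig flag as in the question):

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" -n kube-system get pods -w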

Downloading the calico docker images from docker.io looks like it is failing due to a network issue. So you can pull the calico images from docker.io, push them to your internal container registry, modify the image fields in calico.yaml to reference that registry, and finally apply the modified calico.yaml to the Kubernetes cluster, as sketched below.
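
A sketch of that workflow (private-repo stands in for your internal registry; the image list matches Calico v3.11.2):

# mirror one Calico image into the internal registry;
# repeat for calico/node, calico/pod2daemon-flexvol and calico/kube-controllers
sudo docker pull docker.io/calico/cni:v3.11.2
sudo docker tag docker.io/calico/cni:v3.11.2 private-repo/calico/cni:v3.11.2
sudo docker push private-repo/calico/cni:v3.11.2

# rewrite the image references in calico.yaml and re-apply it
sed -i 's#image: calico/#image: private-repo/calico/#' calico.yaml
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml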

So the problem behind Init:ImagePullBackOff was that it could not automatically pull the images from my private repository. I had to pull all of Calico's images manually with docker. Then I deleted the calico pod, and it recreated itself with the freshly pulled images:

sudo docker pull private-repo/calico/pod2daemon-flexvol:v3.11.2
sudo docker pull private-repo/calico/node:v3.11.2
sudo docker pull private-repo/calico/cni:v3.11.2
sudo docker pull private-repo/calico/kube-controllers:v3.11.2

sudo kubectl -n kube-system delete po/calico-node-y7g5
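
Pre-pulling works because kubelet then finds the images in the local Docker cache, but a more durable fix for the "no basic auth credentials" error is to give the node, or the pods, credentials for the registry. A sketch, with repo:10000 taken from the logs and the account details as placeholders:

# option 1: log the node's Docker daemon into the registry
sudo docker login repo:10000

# option 2: create a pull secret (the name "regcred" is arbitrary) that
# pod specs can reference via imagePullSecrets
sudo kubectl -n kube-system create secret docker-registry regcred \
  --docker-server=repo:10000 --docker-username=<user> --docker-password=<password>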

After deleting the pod, the node re-ran all the init stages, and:

sudo kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5644fb7cf6-qkf47   1/1     Running   0          11s
calico-node-mkcsr                          1/1     Running   0          21m
coredns-7fb8cdf968-bgqvj                   1/1     Running   0          37m
coredns-7fb8cdf968-v85jx                   1/1     Running   0          37m
etcd-lin-1k8w1dv-vmh                       1/1     Running   0          38m
kube-apiserver-lin-1k8w1dv-vmh             1/1     Running   0          38m
kube-controller-manager-lin-1k8w1dv-vmh    1/1     Running   0          38m
kube-proxy-9hkns                           1/1     Running   0          37m
kube-scheduler-lin-1k8w1dv-vmh             1/1     Running   0          38m