kubeadm + calico 3.6 single node NotReady forever

I am using Ubuntu bionic (18.04) with the latest kubeadm from the Ubuntu repositories (1.13.4) and Calico 3.6, following the "Installing with the Kubernetes API datastore—50 nodes or less" documentation (https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/calico).

I started with:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

but after I apply calico.yaml, my node reports the following conditions:

Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----            ------  -----------------                ------------------               ------                      -------
  MemoryPressure  False   Mon, 15 Apr 2019 20:24:43 -0300  Mon, 15 Apr 2019 20:21:20 -0300  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Mon, 15 Apr 2019 20:24:43 -0300  Mon, 15 Apr 2019 20:21:20 -0300  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Mon, 15 Apr 2019 20:24:43 -0300  Mon, 15 Apr 2019 20:21:20 -0300  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           False   Mon, 15 Apr 2019 20:24:43 -0300  Mon, 15 Apr 2019 20:21:20 -0300  KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
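The "cni config uninitialized" message suggests the kubelet has not found a CNI config yet. As a sketch (assuming the default kubelet CNI paths, which the Calico install-cni container is supposed to populate), this is one way to check whether Calico has written its config at all:

```shell
# Default directory the kubelet reads CNI network configs from; it stays
# empty until Calico's install-cni init container writes 10-calico.conflist.
ls -la /etc/cni/net.d

# Default directory for the CNI plugin binaries themselves.
ls -la /opt/cni/bin
```

If both directories are empty, the node condition above is expected, and the real question is why the calico-node pod never finished initializing.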

When I look at the system pods (kubectl get pods -n kube-system) I get:

NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-55df754b5d-zsttg   0/1     Pending    0          34s
calico-node-5n6p2                          0/1     Init:0/2   0          35s
coredns-86c58d9df4-jw7wk                   0/1     Pending    0          99s
coredns-86c58d9df4-sztxw                   0/1     Pending    0          99s
etcd-cherokee                              1/1     Running    0          36s
kube-apiserver-cherokee                    1/1     Running    0          46s
kube-controller-manager-cherokee           1/1     Running    0          59s
kube-proxy-22xwj                           1/1     Running    0          99s
kube-scheduler-cherokee                    1/1     Running    0          44s
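The calico-node pod stuck at Init:0/2 means neither of its two init containers has completed. A hedged diagnostic sketch (the container name install-cni is taken from the v3.6 calico.yaml manifest; adjust if your manifest differs):

```shell
# Events at the bottom of the output usually say why the first init
# container is stuck (image pull, scheduling, crash, etc.).
kubectl describe pod -n kube-system calico-node-5n6p2

# Logs from the install-cni init container, which is responsible for
# dropping the CNI config into /etc/cni/net.d on the host.
kubectl logs -n kube-system calico-node-5n6p2 -c install-cni
```

The Pending coredns and calico-kube-controllers pods are a symptom, not the cause: they cannot be scheduled/started until the node's network is ready.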

Could this be a bug, or am I missing something?

I already tried removing the taint on the master node:

kubectl taint nodes --all node-role.kubernetes.io/master-

as described here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#control-plane-node-isolation