Kubernetes not showing nodes

I initialized the master node and joined the worker nodes to the cluster with kubeadm. According to the logs, the worker nodes joined the cluster successfully.

However, when I list the nodes on the master with kubectl get nodes, the worker nodes are not there. What's wrong?

[vagrant@localhost ~]$ kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
localhost.localdomain   Ready    master   12m   v1.13.1

Here are the kubeadm logs:

PLAY [Alusta kubernetes masterit] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]

TASK [kubeadm reset] ***********************************************************
changed: [k8s-n1] => {
   "changed":true,
   "cmd":"kubeadm reset -f",
   "delta":"0:00:01.078073",
   "end":"2019-01-05 07:06:59.079748",
   "rc":0,
   "start":"2019-01-05 07:06:58.001675",
   "stderr":"",
   "stderr_lines":[  

   ],
   ...
}

TASK [kubeadm init] ************************************************************
changed: [k8s-n1] => {
   "changed":true,
   "cmd":"kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
   "delta":"0:01:05.163377",
   "end":"2019-01-05 07:08:06.229286",
   "rc":0,
   "start":"2019-01-05 07:07:01.065909",
   "stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
   "stderr_lines":[  
      "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
   ],
   "stdout":"[init] Using Kubernetes version: v1.13.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] 
Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\n[apiclient] All control plane components are healthy after 19.504023 seconds\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\n[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[addons] Applied essential addon: 
CoreDNS\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n  kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
   "stdout_lines":[  
      "[init] Using Kubernetes version: v1.13.1",
      "[preflight] Running pre-flight checks",
      "[preflight] Pulling images required for setting up a Kubernetes cluster",
      "[preflight] This might take a minute or two, depending on the speed of your internet connection",
      "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
      "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
      "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
      "[kubelet-start] Activating the kubelet service",
      "[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
      "[certs] Generating \"ca\" certificate and key",
      "[certs] Generating \"apiserver\" certificate and key",
      "[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]",
      "[certs] Generating \"apiserver-kubelet-client\" certificate and key",
      "[certs] Generating \"etcd/ca\" certificate and key",
      "[certs] Generating \"etcd/server\" certificate and key",
      "[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
      "[certs] Generating \"etcd/healthcheck-client\" certificate and key",
      "[certs] Generating \"etcd/peer\" certificate and key",
      "[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
      "[certs] Generating \"apiserver-etcd-client\" certificate and key",
      "[certs] Generating \"front-proxy-ca\" certificate and key",
      "[certs] Generating \"front-proxy-client\" certificate and key",
      "[certs] Generating \"sa\" key and public key",
      "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
      "[kubeconfig] Writing \"admin.conf\" kubeconfig file",
      "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
      "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
      "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
      "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
      "[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
      "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
      "[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
      "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
      "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
      "[apiclient] All control plane components are healthy after 19.504023 seconds",
      "[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
      "[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster",
      "[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
      "[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"",
      "[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]",
      "[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6",
      "[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
      "[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
      "[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
      "[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
      "[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
      "[addons] Applied essential addon: CoreDNS",
      "[addons] Applied essential addon: kube-proxy",
      "",
      "Your Kubernetes master has initialized successfully!",
      "",
      "To start using your cluster, you need to run the following as a regular user:",
      "",
      "  mkdir -p $HOME/.kube",
      "  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
      "  sudo chown $(id -u):$(id -g) $HOME/.kube/config",
      "",
      "You should now deploy a pod network to the cluster.",
      "Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
      "  https://kubernetes.io/docs/concepts/cluster-administration/addons/",
      "",
      "You can now join any number of machines by running the following on each node",
      "as root:",
      "",
      "  kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
   ]
}

TASK [set_fact] ****************************************************************
ok: [k8s-n1] => {
   "ansible_facts":{  
      "kubeadm_join":"  kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
   },
   "changed":false
}

TASK [debug] *******************************************************************
ok: [k8s-n1] => {
   "kubeadm_join":"  kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
}

TASK [Aseta ymparistomuuttujat] ************************************************
changed: [k8s-n1] => {
   "changed":true,
   "cmd":"cp /etc/kubernetes/admin.conf /home/vagrant/ && chown vagrant:vagrant /home/vagrant/admin.conf && export KUBECONFIG=/home/vagrant/admin.conf && echo export KUBECONFIG=$KUBECONFIG >> /home/vagrant/.bashrc",
   "delta":"0:00:00.008628",
   "end":"2019-01-05 07:08:08.663360",
   "rc":0,
   "start":"2019-01-05 07:08:08.654732",
   "stderr":"",
   "stderr_lines":[  

   ],
   "stdout":"",
   "stdout_lines":[  

   ]
}

PLAY [Konfiguroi CNI-verkko] ***************************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]

TASK [sysctl] ******************************************************************
ok: [k8s-n1] => {
   "changed":false
}

TASK [sysctl] ******************************************************************
ok: [k8s-n1] => {
   "changed":false
}

TASK [Asenna Flannel-plugin] ***************************************************
changed: [k8s-n1] => {
   "changed":true,
   "cmd":"export KUBECONFIG=/home/vagrant/admin.conf ; kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml",
   "delta":"0:00:00.517346",
   "end":"2019-01-05 07:08:17.731759",
   "rc":0,
   "start":"2019-01-05 07:08:17.214413",
   "stderr":"",
   "stderr_lines":[  

   ],
   "stdout":"clusterrole.rbac.authorization.k8s.io/flannel created\nclusterrolebinding.rbac.authorization.k8s.io/flannel created\nserviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.extensions/kube-flannel-ds-amd64 created\ndaemonset.extensions/kube-flannel-ds-arm64 created\ndaemonset.extensions/kube-flannel-ds-arm created\ndaemonset.extensions/kube-flannel-ds-ppc64le created\ndaemonset.extensions/kube-flannel-ds-s390x created",
   "stdout_lines":[  
      "clusterrole.rbac.authorization.k8s.io/flannel created",
      "clusterrolebinding.rbac.authorization.k8s.io/flannel created",
      "serviceaccount/flannel created",
      "configmap/kube-flannel-cfg created",
      "daemonset.extensions/kube-flannel-ds-amd64 created",
      "daemonset.extensions/kube-flannel-ds-arm64 created",
      "daemonset.extensions/kube-flannel-ds-arm created",
      "daemonset.extensions/kube-flannel-ds-ppc64le created",
      "daemonset.extensions/kube-flannel-ds-s390x created"
   ]
}

TASK [shell] *******************************************************************
changed: [k8s-n1] => {
   "changed":true,
   "cmd":"sleep 10",
   "delta":"0:00:10.004446",
   "end":"2019-01-05 07:08:29.833488",
   "rc":0,
   "start":"2019-01-05 07:08:19.829042",
   "stderr":"",
   "stderr_lines":[  

   ],
   "stdout":"",
   "stdout_lines":[  

   ]
}

PLAY [Alusta kubernetes workerit] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n3]
ok: [k8s-n2]

TASK [kubeadm reset] ***********************************************************
changed: [k8s-n3] => {
   "changed":true,
   "cmd":"kubeadm reset -f",
   "delta":"0:00:00.085388",
   "end":"2019-01-05 07:08:34.547407",
   "rc":0,
   "start":"2019-01-05 07:08:34.462019",
   "stderr":"",
   "stderr_lines":[  

   ],
   ...
}
changed: [k8s-n2] => {
   "changed":true,
   "cmd":"kubeadm reset -f",
   "delta":"0:00:00.086224",
   "end":"2019-01-05 07:08:34.600794",
   "rc":0,
   "start":"2019-01-05 07:08:34.514570",
   "stderr":"",
   "stderr_lines":[  

   ],
   "stdout":"[preflight] running pre-flight checks\n[reset] no etcd config found. Assuming external etcd\n[reset] please manually reset etcd to prevent further issues\n[reset] stopping the kubelet service\n[reset] unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]\n[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]\n[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.\nIf you wish to reset iptables, you must do so manually.\nFor example: \niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)\nto reset your system's IPVS tables.",
   "stdout_lines":[  
      "[preflight] running pre-flight checks",
      "[reset] no etcd config found. Assuming external etcd",
      "[reset] please manually reset etcd to prevent further issues",
      "[reset] stopping the kubelet service",
      "[reset] unmounting mounted directories in \"/var/lib/kubelet\"",
      "[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]",
      "[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
      "[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
      "",
      "The reset process does not reset or clean up iptables rules or IPVS tables.",
      "If you wish to reset iptables, you must do so manually.",
      "For example: ",
      "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X",
      "",
      "If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
      "to reset your system's IPVS tables."
   ]
}

TASK [kubeadm join] ************************************************************
changed: [k8s-n3] => {
   "changed":true,
   "cmd":"  kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
   "delta":"0:00:01.988676",
   "end":"2019-01-05 07:08:38.771956",
   "rc":0,
   "start":"2019-01-05 07:08:36.783280",
   "stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
   "stderr_lines":[  
      "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
   ],
   "stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
   "stdout_lines":[  
      "[preflight] Running pre-flight checks",
      "[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
      "[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
      "[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
      "[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
      "[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
      "[join] Reading configuration from the cluster...",
      "[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
      "[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
      "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
      "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
      "[kubelet-start] Activating the kubelet service",
      "[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
      "[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
      "",
      "This node has joined the cluster:",
      "* Certificate signing request was sent to apiserver and a response was received.",
      "* The Kubelet was informed of the new secure connection details.",
      "",
      "Run 'kubectl get nodes' on the master to see this node join the cluster."
   ]
}
changed: [k8s-n2] => {
   "changed":true,
   "cmd":"  kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
   "delta":"0:00:02.000874",
   "end":"2019-01-05 07:08:38.979256",
   "rc":0,
   "start":"2019-01-05 07:08:36.978382",
   "stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
   "stderr_lines":[  
      "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
   ],
   "stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
   "stdout_lines":[  
      "[preflight] Running pre-flight checks",
      "[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
      "[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
      "[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
      "[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
      "[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
      "[join] Reading configuration from the cluster...",
      "[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
      "[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
      "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
      "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
      "[kubelet-start] Activating the kubelet service",
      "[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
      "[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
      "",
      "This node has joined the cluster:",
      "* Certificate signing request was sent to apiserver and a response was received.",
      "* The Kubelet was informed of the new secure connection details.",
      "",
      "Run 'kubectl get nodes' on the master to see this node join the cluster."
   ]
}

PLAY RECAP *********************************************************************
k8s-n1 : ok=24   changed=16   unreachable=0    failed=0
k8s-n2 : ok=16   changed=13   unreachable=0    failed=0
k8s-n3 : ok=16   changed=13   unreachable=0    failed=0


[vagrant@localhost ~]$ kubectl get events -a
Flag --show-all has been deprecated, will be removed in an upcoming release
LAST SEEN   TYPE      REASON     KIND   MESSAGE
3m15s       Warning   Rebooted   Node   Node localhost.localdomain has been rebooted, boot id: 72f6776d-c267-4e31-8e6d-a4d36da1d510
3m16s       Warning   Rebooted   Node   Node localhost.localdomain has been rebooted, boot id: 2d68a2c8-e27a-45ff-b7d7-5ce33c9e1cc4
4m2s        Warning   Rebooted   Node   Node localhost.localdomain has been rebooted, boot id: 0213bbdf-f4cd-4e19-968e-8162d95de9a6

By default, a node (the kubelet) identifies itself by its hostname. The hostnames of your VMs appear not to be set: in the join logs, both workers upload their CRI socket to the same Node API object "localhost.localdomain", and the events above show three different boot IDs all reported under that one node name. Each join therefore overwrites the same Node object instead of registering a new one, which is why kubectl get nodes shows only a single node.

In your Vagrantfile, set the hostname value to a different name for each VM: https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname
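A minimal sketch of what that could look like. The names k8s-n1/n2/n3 come from your inventory and 10.0.0.101 from the kubeadm init log; the worker IPs and the box name are illustrative assumptions you should adapt to your setup:

```ruby
# Vagrantfile (sketch) -- give each VM a unique hostname so each kubelet
# registers its own Node object instead of all of them claiming
# "localhost.localdomain".
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"  # assumption: use whatever box you already have

  nodes = {
    "k8s-n1" => "10.0.0.101",  # master address, from the kubeadm init log
    "k8s-n2" => "10.0.0.102",  # worker IPs here are illustrative
    "k8s-n3" => "10.0.0.103",
  }

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name  # the actual fix: one unique hostname per VM
      node.vm.network "private_network", ip: ip
    end
  end
end
```

After re-provisioning (or setting the hostname manually with hostnamectl set-hostname and re-running kubeadm reset and kubeadm join on each worker), kubectl get nodes should list three distinct nodes.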