Kubernetes kubeadm reset error - unable to reset

I initialized Kubernetes with kubeadm, and now when I try to reset it with `kubeadm reset` I get the following error. I searched several forums but couldn't find an answer.

> {"level":"warn","ts":"2020-05-28T11:57:52.940+0200","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-e6d5f25b-0ed2-400f-b4d7-2ccabb09a838/192.168.178.200:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}

The master node shows as NotReady, and I cannot reset the network plugin (Weave).

    ubuntu@ubuntu-nuc-masternode:~$ kubectl get nodes
    NAME         STATUS                        ROLES    AGE   VERSION
    ubuntu-nuc   NotReady,SchedulingDisabled   master   20h   v1.18.3

I tried forcing the reset, but it didn't work. Any help is much appreciated.

This appears to be the reported issue *kubeadm reset takes more than 50 seconds to retry deleting the last etcd member*, which was moved here.

The fix was committed on May 28 in *kubeadm: skip removing last etcd member in reset phase*:

> What type of PR is this?
> /kind bug
>
> What this PR does / why we need it:
> If this is the last etcd member of the cluster, it cannot be removed due to "not enough started members". Skip it as the cluster will be destroyed in the next phase, otherwise the retries with exponential backoff will take more than 50 seconds to proceed.
>
> Which issue(s) this PR fixes:
>
> Fixes kubernetes/kubeadm#2144
>
> Special notes for your reviewer:
>
> Does this PR introduce a user-facing change?:
>
> kubeadm: during "reset" do not remove the only remaining stacked etcd member from the cluster and just proceed with the cleanup of the local etcd storage.
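Until you are running a kubeadm build that contains that fix, a workaround is to let `kubeadm reset -f` sit through the ~50 seconds of etcd retries and then clean up the leftovers by hand. A sketch, assuming a single-node stacked-etcd cluster with default kubeadm paths and Weave as the CNI (adjust paths and interface names if your setup differs):

```shell
# Force the reset; the "remove etcd member" step retries with
# exponential backoff for roughly 50s before it gives up -- just wait.
sudo kubeadm reset -f

# Remove state that reset does not always clean up itself
# (default kubeadm and stacked-etcd data directories).
sudo rm -rf /etc/kubernetes/manifests /var/lib/etcd

# kubeadm reset explicitly does NOT clean up CNI configuration;
# Weave leaves its config and a "weave" bridge interface behind.
sudo rm -rf /etc/cni/net.d
sudo ip link delete weave 2>/dev/null || true

# Flush iptables rules installed by kube-proxy and Weave so the
# next "kubeadm init" starts from a clean slate.
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F
```

After this, `kubeadm init` should bring the node up cleanly again; the note about CNI and iptables cleanup being the user's responsibility comes from the `kubeadm reset` output itself.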