k8s coredns pods showing CrashLoopBackOff due to the max_concurrent property
I believe this issue appeared after the cluster was upgraded from k8s 1.18 to 1.19; the pods were not checked after the upgrade.
k8scka@master:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7dbc97f587-dfbwl 1/1 Running 0 8d
calico-node-jkgwv 1/1 Running 0 10d
calico-node-wkncc 1/1 Running 0 10d
coredns-66bff467f8-frh49 0/1 CrashLoopBackOff 2093 7d10h
coredns-66bff467f8-wlb22 0/1 CrashLoopBackOff 2092 7d10h
etcd-master 1/1 Running 0 8d
kube-apiserver-master 1/1 Running 0 8d
kube-controller-manager-master 1/1 Running 0 8d
kube-proxy-ljz55 1/1 Running 0 8d
kube-proxy-w8nvg 1/1 Running 0 8d
kube-scheduler-master 1/1 Running 0 8d
k8scka@master:~$
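
To confirm the root cause, check the logs of one of the crashing pods. If this is the same issue, the log should complain about the max_concurrent option in the forward plugin, because the old CoreDNS binary does not understand the option that the 1.19 upgrade added to its ConfigMap (the exact log wording depends on the CoreDNS version):

k8scka@master:~$ kubectl logs -n kube-system coredns-66bff467f8-frh49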
I checked the coredns ConfigMap (CM) and deleted the max_concurrent property. The problem resolved itself.
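
A minimal sketch of that edit, assuming the default Corefile that kubeadm generates for 1.19 (your Corefile may differ). Open the ConfigMap and drop the max_concurrent line from the forward block:

k8scka@master:~$ kubectl -n kube-system edit configmap coredns

    # before
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    # after
    forward . /etc/resolv.conf

The default Corefile also includes the reload plugin, which is likely why the pods picked up the change and recovered without a manual restart.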
Alternatively, we can upgrade coredns itself to fix the issue.
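
A sketch of that upgrade path, assuming the deployment and its container are both named coredns and that the target is CoreDNS 1.7.0 (the version bundled with k8s 1.19, and the first release whose forward plugin supports max_concurrent):

k8scka@master:~$ kubectl -n kube-system set image deployment/coredns coredns=k8s.gcr.io/coredns:1.7.0
k8scka@master:~$ kubectl -n kube-system rollout status deployment/coredns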