microk8s coredns CrashLoopBackOff
I'm using microk8s on Ubuntu, and I have a problem where the coredns pod won't start; I suspect this is causing problems for my other pods.
When I run `microk8s.kubectl get pods`, the pod shows a CrashLoopBackOff status.
Here is the pod description:
Name: coredns-86f78bb79c-bdt7t
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: chwc21ubtu/10.2.2.10
Start Time: Tue, 25 Jan 2022 09:35:45 +0000
Labels: k8s-app=kube-dns
pod-template-hash=86f78bb79c
Annotations: cni.projectcalico.org/podIP: 10.1.249.135/32
cni.projectcalico.org/podIPs: 10.1.249.135/32
scheduler.alpha.kubernetes.io/critical-pod:
Status: Running
IP: 10.1.249.135
IPs:
IP: 10.1.249.135
Controlled By: ReplicaSet/coredns-86f78bb79c
Containers:
coredns:
Container ID: containerd://045a0cdd5d6e1b736f9f7469a189cbbc2c87df56c2af62bcdc825eda0aa3719c
Image: coredns/coredns:1.6.6
Image ID: docker.io/coredns/coredns@sha256:41bee6992c2ed0f4628fcef75751048927bcd6b1cee89c79f6acb63ca5474d5a
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 25 Jan 2022 11:54:55 +0000
Finished: Tue, 25 Jan 2022 11:54:55 +0000
Ready: False
Restart Count: 32
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-rj6x7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-rj6x7:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-rj6x7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m56s (x663 over 143m) kubelet Back-off restarting failed container
Here are the contents of /etc/resolv.conf:
cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search wnzgl0qq22vevmrhjm1h5bny0c.zx.internal.cloudapp.net
And here are the logs:
sudo microk8s.kubectl -n kube-system logs -p coredns-86f78bb79c-bdt7t
plugin/forward: not an IP address or file: "reload"
Looking at the coredns ConfigMap, there appears to be an extra value, 'reload', which the forward plugin is trying to parse as an upstream IP address or file:
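The error message makes sense once you see how the forward plugin reads its arguments: everything after the base domain on that line must be an upstream IP address (the real plugin also accepts a resolv.conf-style file path). A rough illustration of that check, in Python rather than CoreDNS's actual Go code:

```python
import ipaddress

def validate_forward_args(args):
    """Roughly mimic how CoreDNS's forward plugin vets its upstream
    arguments: each must parse as an IP address. (The real plugin also
    accepts a file path; that case is omitted here for brevity.)"""
    bad = []
    for arg in args:
        try:
            ipaddress.ip_address(arg)
        except ValueError:
            bad.append(arg)
    return bad

# 'reload' ended up on the same line as the upstreams, so it is parsed
# as another upstream -- and it is not an IP address.
print(validate_forward_args(["8.8.8.8", "8.8.4.4", "reload"]))  # ['reload']
```

This is why the log says `plugin/forward: not an IP address or file: "reload"`: `reload` is a separate CoreDNS plugin directive and needs to be on its own line.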
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
log . {
class error
}
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4 reload
loadbalance
}
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n log . {\n class error\n }\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 8.8.8.8 8.8.4.4 reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"creationTimestamp":"2022-01-25T09:35:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kube-dns"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:Corefile":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{}}}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2022-01-25T09:35:32Z"}],"name":"coredns","namespace":"kube-system","resourceVersion":"518","selfLink":"/api/v1/namespaces/kube-system/configmaps/coredns","uid":"8a73848a-4118-417b-80f5-2175bb64acc4"}}
creationTimestamp: "2022-01-25T09:35:32Z"
labels:
addonmanager.kubernetes.io/mode: EnsureExists
k8s-app: kube-dns
name: coredns
namespace: kube-system
resourceVersion: "597"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: 8a73848a-4118-417b-80f5-2175bb64acc4
I don't know where to look next - can anyone offer any advice?
Exit code 1 indicates an application error. Check this link for more information about exit codes.
Solved!
In the ConfigMap, there should have been a carriage return before the word 'reload'. I edited the ConfigMap with vim, using the command shown below.
My ConfigMap now looks like this:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
log . {
class error
}
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
reload
loadbalance
}
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n log . {\n class error\n }\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 8.8.8.8 8.8.4.4 **reload**\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"creationTimestamp":"2022-01-25T09:35:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kube-dns"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:Corefile":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{}}}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2022-01-25T09:35:32Z"}],"name":"coredns","namespace":"kube-system","resourceVersion":"518","selfLink":"/api/v1/namespaces/kube-system/configmaps/coredns","uid":"8a73848a-4118-417b-80f5-2175bb64acc4"}}
creationTimestamp: "2022-01-25T09:35:32Z"
labels:
addonmanager.kubernetes.io/mode: EnsureExists
k8s-app: kube-dns
name: coredns
namespace: kube-system
resourceVersion: "18764"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: 8a73848a-4118-417b-80f5-2175bb64acc4
sudo microk8s.kubectl -n kube-system edit configmaps coredns -o yaml
Then I restarted all of the pods using:
sudo microk8s.kubectl -n kube-system rollout restart deploy
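After the restart, it's worth confirming that coredns actually stays Ready and that in-cluster DNS resolves. Something along these lines (these commands assume a live MicroK8s cluster; the `dns-test` pod name is just an illustrative throwaway):

```shell
# Watch the coredns pod come back Ready (the restart count should stop climbing)
sudo microk8s.kubectl -n kube-system get pods -l k8s-app=kube-dns

# Tail the logs to confirm the Corefile now parses cleanly
sudo microk8s.kubectl -n kube-system logs deploy/coredns --tail=20

# Spot-check DNS from inside the cluster with a temporary busybox pod
sudo microk8s.kubectl run dns-test --rm -it --image=busybox --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
```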