Why does the pod terminate itself?
I am trying to install fluentd with Elasticsearch and Kibana using the Bitnami Helm chart.
I am following the article mentioned below:
Integrate Logging Kubernetes Kibana ElasticSearch Fluentd
But when I deploy Elasticsearch, its pod keeps going into a Terminating or Back-off state.
I have been stuck on this issue for 3 days now; any help is appreciated.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 41m (x2 over 41m) default-scheduler error while running "VolumeBinding" filter plugin for pod "elasticsearch-master-0": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 41m default-scheduler Successfully assigned default/elasticsearch-master-0 to minikube
Normal Pulling 41m kubelet, minikube Pulling image "busybox:latest"
Normal Pulled 41m kubelet, minikube Successfully pulled image "busybox:latest"
Normal Created 41m kubelet, minikube Created container sysctl
Normal Started 41m kubelet, minikube Started container sysctl
Normal Pulling 41m kubelet, minikube Pulling image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
Normal Pulled 39m kubelet, minikube Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
Normal Created 39m kubelet, minikube Created container chown
Normal Started 39m kubelet, minikube Started container chown
Normal Created 38m kubelet, minikube Created container elasticsearch
Normal Started 38m kubelet, minikube Started container elasticsearch
Warning Unhealthy 38m kubelet, minikube Readiness probe failed: Get http://172.17.0.7:9200/_cluster/health?local=true: dial tcp 172.17.0.7:9200: connect: connection refused
Normal Pulled 38m (x2 over 38m) kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Warning FailedMount 32m kubelet, minikube MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
Normal SandboxChanged 32m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulling 32m kubelet, minikube Pulling image "busybox:latest"
Normal Pulled 32m kubelet, minikube Successfully pulled image "busybox:latest"
Normal Created 32m kubelet, minikube Created container sysctl
Normal Started 32m kubelet, minikube Started container sysctl
Normal Pulled 32m kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Normal Created 32m kubelet, minikube Created container chown
Normal Started 32m kubelet, minikube Started container chown
Normal Pulled 32m (x2 over 32m) kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Normal Created 32m (x2 over 32m) kubelet, minikube Created container elasticsearch
Normal Started 32m (x2 over 32m) kubelet, minikube Started container elasticsearch
Warning Unhealthy 32m kubelet, minikube Readiness probe failed: Get http://172.17.0.6:9200/_cluster/health?local=true: dial tcp 172.17.0.6:9200: connect: connection refused
Warning BackOff 32m (x2 over 32m) kubelet, minikube Back-off restarting failed container
Short answer: it crashed. You can check the pod's status object for details, such as the exit status and whether it was an OOM kill, and then look at the container logs to see if they show anything.
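
For example, assuming the pod name and namespace shown in the events above, the following standard kubectl commands surface the exit code and the logs of the crashed container:

    # Show status, last state (exit code, OOMKilled, etc.) and recent events
    kubectl describe pod elasticsearch-master-0 -n default

    # Print just the last terminated state of the first container
    kubectl get pod elasticsearch-master-0 -n default \
      -o jsonpath='{.status.containerStatuses[0].lastState}'

    # Read the logs of the previous (crashed) container instance
    kubectl logs elasticsearch-master-0 -n default --previous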
The problem here is that the pod has unbound immediate PersistentVolumeClaims. You can set master.persistence.enabled to false when deploying the chart with Helm. Alternatively, check whether a default StorageClass exists in the cluster; if not, create one and make it the default.
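
As a rough sketch of both options (the release name "elasticsearch" and the bitnami/elasticsearch chart reference are assumptions; adjust them to your setup):

    # Option 1: disable persistence for the master nodes
    helm install elasticsearch bitnami/elasticsearch \
      --set master.persistence.enabled=false

    # Option 2: ensure a default StorageClass exists.
    # On minikube it is normally provided by built-in addons:
    kubectl get storageclass
    minikube addons enable default-storageclass
    minikube addons enable storage-provisioner

    # Or mark an existing StorageClass (e.g. "standard") as the default:
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Note that disabling persistence is fine for a throwaway minikube test, but the Elasticsearch data will then not survive pod restarts.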