Kubernetes - Container image already present on machine
So I have 2 similar deployments on k8s that pull the same image from GitLab. Apparently this caused my second deployment to go into a CrashLoopBackOff error, and I can't seem to connect to the port to check the /healthz of my pod. Logging the pod shows that the pod received an interrupt signal, while describing the pod shows the following messages:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
29m 29m 1 default-scheduler Normal Scheduled Successfully assigned java-kafka-rest-kafka-data-2-development-5c6f7f597-5t2mr to 172.18.14.110
29m 29m 1 kubelet, 172.18.14.110 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-m4m55"
29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Pulled Container image "..../consul-image:0.0.10" already present on machine
29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Created Created container
29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Started Started container
28m 28m 1 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Killing Killing container with id docker://java-kafka-rest-development:Container failed liveness probe.. Container will be killed and recreated.
29m 28m 2 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Created Created container
29m 28m 2 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Started Started container
29m 27m 10 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning Unhealthy Readiness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
28m 24m 13 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning Unhealthy Liveness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
29m 19m 8 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Pulled Container image "r..../java-kafka-rest:0.3.2-dev" already present on machine
24m 4m 73 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning BackOff Back-off restarting failed container
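For reference, probe failures like these can be checked by hand with kubectl (a generic debugging sketch; the pod and container names are taken from the events above, and it assumes the image ships a tool like ss):

# forward the pod's port locally and hit the health endpoint directly
kubectl port-forward pod/java-kafka-rest-kafka-data-2-development-5c6f7f597-5t2mr 7533:7533
curl -v http://127.0.0.1:7533/healthz

# or list the ports the app is actually listening on inside the container
kubectl exec java-kafka-rest-kafka-data-2-development-5c6f7f597-5t2mr -c java-kafka-rest-development -- ss -tlnp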
I have tried redeploying the deployments under different images, and that seems to work just fine. However, I figure this will not be effective since the image is always the same. How should I proceed?
This is what my deployment file looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "java-kafka-rest-kafka-data-2-development"
  labels:
    repository: "java-kafka-rest"
    project: "java-kafka-rest"
    service: "java-kafka-rest-kafka-data-2"
    env: "development"
spec:
  replicas: 1
  selector:
    matchLabels:
      repository: "java-kafka-rest"
      project: "java-kafka-rest"
      service: "java-kafka-rest-kafka-data-2"
      env: "development"
  template:
    metadata:
      labels:
        repository: "java-kafka-rest"
        project: "java-kafka-rest"
        service: "java-kafka-rest-kafka-data-2"
        env: "development"
        release: "0.3.2-dev"
    spec:
      imagePullSecrets:
      - name: ...
      containers:
      - name: java-kafka-rest-development
        image: registry...../java-kafka-rest:0.3.2-dev
        env:
        - name: DEPLOYMENT_COMMIT_HASH
          value: "0.3.2-dev"
        - name: DEPLOYMENT_PORT
          value: "7533"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 7533
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 7533
          timeoutSeconds: 1
        ports:
        - containerPort: 7533
        resources:
          requests:
            cpu: 0.5
            memory: 6Gi
          limits:
            cpu: 3
            memory: 10Gi
        command:
        - /envconsul
        - -consul=127.0.0.1:8500
        - -sanitize
        - -upcase
        - -prefix=java-kafka-rest/
        - -prefix=java-kafka-rest/kafka-data-2
        - java
        - -jar
        - /build/libs/java-kafka-rest-0.3.2-dev.jar
        securityContext:
          readOnlyRootFilesystem: true
      - name: consul
        image: registry.../consul-image:0.0.10
        env:
        - name: SERVICE_NAME
          value: java-kafka-rest-kafka-data-2
        - name: SERVICE_ENVIRONMENT
          value: development
        - name: SERVICE_PORT
          value: "7533"
        - name: CONSUL1
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node1
        - name: CONSUL2
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node2
        - name: CONSUL3
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node3
        - name: CONSUL_ENCRYPT
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: encrypt
        ports:
        - containerPort: 8300
        - containerPort: 8301
        - containerPort: 8302
        - containerPort: 8400
        - containerPort: 8500
        - containerPort: 8600
        command: [ entrypoint, agent, -config-dir=/config, -join=$(CONSUL1), -join=$(CONSUL2), -join=$(CONSUL3), -encrypt=$(CONSUL_ENCRYPT) ]
      terminationGracePeriodSeconds: 30
      nodeSelector:
        env: ...
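One detail worth noting in the manifest above: the readinessProbe has no initialDelaySeconds, so readiness checks start immediately and the first few "connection refused" failures are expected while the JVM is still starting. A minimal tweak (the 30-second value is an assumption, mirroring the liveness probe):

        readinessProbe:
          httpGet:
            path: /healthz
            port: 7533
          initialDelaySeconds: 30   # assumed value; gives the JVM time to bind port 7533
          timeoutSeconds: 1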
For those having this problem, I have discovered the issue and the solution to my question. Apparently the problem lay with my service.yml, where my targetPort pointed to a different port than the one I had opened in my docker image. Make sure the port opened in the docker image connects to the right port.
Hope this helps.
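For reference, a minimal sketch of that kind of fix (the Service name, selector labels, and port 80 are assumptions mirroring the deployment above; the essential part is that targetPort matches the containerPort 7533 the app actually listens on):

apiVersion: v1
kind: Service
metadata:
  name: java-kafka-rest-kafka-data-2-development
spec:
  selector:
    service: "java-kafka-rest-kafka-data-2"
    env: "development"
  ports:
  - port: 80            # port the Service exposes inside the cluster (assumed)
    targetPort: 7533    # must match the containerPort opened in the image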