Prometheus & Alert Manager keeps crashing after updating the EKS version to 1.16

prometheus-prometheus-kube-prometheus-prometheus-0         0/2   Terminating   0   4s
alertmanager-prometheus-kube-prometheus-alertmanager-0     0/2   Terminating   0   10s

After updating the EKS cluster from 1.15 to 1.16, everything works fine except these two pods: they keep terminating and never manage to initialize, so Prometheus monitoring does not work. Describing the pods shows the following errors:
Error: failed to start container "prometheus": Error response from daemon: OCI runtime create failed: container_linux.go:362: creating new parent process caused: container_linux.go:1941: running lstat on namespace path "/proc/29271/ns/ipc" caused: lstat /proc/29271/ns/ipc: no such file or directory: unknown
Error: failed to start container "config-reloader": Error response from daemon: cannot join network of a non running container: 7e139521980afd13dad0162d6859352b0b2c855773d6d4062ee3e2f7f822a0b3
Error: cannot find volume "config" to mount into container "config-reloader"
Error: cannot find volume "config" to mount into container "prometheus"
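For reference, a minimal set of commands to reproduce the status and events above; the monitoring namespace is an assumption based on the selfLink in the manifest below:

kubectl -n monitoring get pods
kubectl -n monitoring describe pod prometheus-prometheus-kube-prometheus-prometheus-0
kubectl -n monitoring describe pod alertmanager-prometheus-kube-prometheus-alertmanager-0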

Here is the YAML of the affected pod:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
  creationTimestamp: "2021-04-30T16:39:14Z"
  deletionGracePeriodSeconds: 600
  deletionTimestamp: "2021-04-30T16:49:14Z"
  generateName: prometheus-prometheus-kube-prometheus-prometheus-
  labels:
    app: prometheus
    app.kubernetes.io/instance: prometheus-kube-prometheus-prometheus
    app.kubernetes.io/managed-by: prometheus-operator
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/version: 2.26.0
    controller-revision-hash: prometheus-prometheus-kube-prometheus-prometheus-56d9fcf57
    operator.prometheus.io/name: prometheus-kube-prometheus-prometheus
    operator.prometheus.io/shard: "0"
    prometheus: prometheus-kube-prometheus-prometheus
    statefulset.kubernetes.io/pod-name: prometheus-prometheus-kube-prometheus-prometheus-0
  name: prometheus-prometheus-kube-prometheus-prometheus-0
  namespace: mo
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: prometheus-prometheus-kube-prometheus-prometheus
    uid: 326a09f2-319c-449d-904a-1dd0019c6d80
  resourceVersion: "9337443"
  selfLink: /api/v1/namespaces/monitoring/pods/prometheus-prometheus-kube-prometheus-prometheus-0
  uid: e2be062f-749d-488e-a6cc-42ef1396851b
spec:
  containers:
  - args:
    - --web.console.templates=/etc/prometheus/consoles
    - --web.console.libraries=/etc/prometheus/console_libraries
    - --config.file=/etc/prometheus/config_out/prometheus.env.yaml
    - --storage.tsdb.path=/prometheus
    - --storage.tsdb.retention.time=10d
    - --web.enable-lifecycle
    - --storage.tsdb.no-lockfile
    - --web.external-url=http://prometheus-kube-prometheus-prometheus.monitoring:9090
    - --web.route-prefix=/
    image: quay.io/prometheus/prometheus:v2.26.0
    imagePullPolicy: IfNotPresent
    name: prometheus
    ports:
    - containerPort: 9090
      name: web
      protocol: TCP
    readinessProbe:
      failureThreshold: 120
      httpGet:
        path: /-/ready
        port: web
        scheme: HTTP
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 3
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /etc/prometheus/config_out
      name: config-out
      readOnly: true
    - mountPath: /etc/prometheus/certs
      name: tls-assets
      readOnly: true
    - mountPath: /prometheus
      name: prometheus-prometheus-kube-prometheus-prometheus-db
    - mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
      name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-kube-prometheus-prometheus-token-mh66q
      readOnly: true
  - args:
    - --listen-address=:8080
    - --reload-url=http://localhost:9090/-/reload
    - --config-file=/etc/prometheus/config/prometheus.yaml.gz
    - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
    - --watched-dir=/etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
    command:
    - /bin/prometheus-config-reloader
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: SHARD
      value: "0"
    image: quay.io/prometheus-operator/prometheus-config-reloader:v0.47.0
    imagePullPolicy: IfNotPresent
    name: config-reloader
    ports:
    - containerPort: 8080
      name: reloader-web
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /etc/prometheus/config
      name: config
    - mountPath: /etc/prometheus/config_out
      name: config-out
    - mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
      name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-kube-prometheus-prometheus-token-mh66q
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: prometheus-prometheus-kube-prometheus-prometheus-0
  nodeName: ip-10-1-49-45.ec2.internal
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 2000
    runAsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccount: prometheus-kube-prometheus-prometheus
  serviceAccountName: prometheus-kube-prometheus-prometheus
  subdomain: prometheus-operated
  terminationGracePeriodSeconds: 600
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: config
    secret:
      defaultMode: 420
      secretName: prometheus-prometheus-kube-prometheus-prometheus
  - name: tls-assets
    secret:
      defaultMode: 420
      secretName: prometheus-prometheus-kube-prometheus-prometheus-tls-assets
  - emptyDir: {}
    name: config-out
  - configMap:
      defaultMode: 420
      name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
    name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
  - emptyDir: {}
    name: prometheus-prometheus-kube-prometheus-prometheus-db
  - name: prometheus-kube-prometheus-prometheus-token-mh66q
    secret:
      defaultMode: 420
      secretName: prometheus-kube-prometheus-prometheus-token-mh66q
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-04-30T16:39:14Z"
    status: "True"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable

In case anyone needs the answer: in my case (the situation above) there were two Prometheus Operators running in different namespaces, one in the default namespace and the other in the monitoring namespace. I deleted the one in the default namespace, and that fixed my crashing pods.
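For anyone hitting the same thing, a rough sketch of how to find and remove the duplicate operator; the release and deployment names below are placeholders, use whatever kubectl/helm actually report in your cluster:

# find every prometheus-operator deployment, in any namespace
kubectl get deployments --all-namespaces | grep prometheus-operator

# if the duplicate was installed with Helm, remove that release
helm list --all-namespaces
helm uninstall <release-name> -n default

# otherwise delete the deployment directly
kubectl delete deployment <deployment-name> -n default

Once the duplicate is removed, the remaining operator in the monitoring namespace should reconcile the Prometheus and Alertmanager resources and the pods come back up.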