Kubectl delete tls when no namespace

A deleted node had a namespace "sandbox", but a challenge for the certificate "echo-tls" still exists. I can no longer access the sandbox namespace to delete this certificate. Can anyone help me remove this resource?
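One common way to clear a stuck cert-manager Certificate is to delete it directly and, if the delete hangs on a finalizer, strip the finalizers so garbage collection can proceed. A hedged sketch, using the resource names from the question (verify each object against your own cluster before deleting):

```shell
# If the namespace itself was deleted, recreate it so the API
# server can resolve the namespaced objects again.
kubectl create namespace sandbox

# Try a normal delete of the stuck Certificate first.
kubectl delete certificate echo-tls -n sandbox --ignore-not-found

# If the delete hangs on a finalizer, clear the finalizers.
kubectl patch certificate echo-tls -n sandbox \
  --type merge -p '{"metadata":{"finalizers":null}}'

# Also clean up any orphaned CertificateRequests and ACME Orders.
kubectl delete certificaterequests,orders.acme.cert-manager.io \
  --all -n sandbox
```

These commands require access to a cluster where the objects still exist; if the namespace is truly gone from etcd, the deletes simply report "not found".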

Here are the cert-manager logs:

Found status change for Certificate "echo-tls" condition "Ready": "True" -> "False"; setting lastTransitionTime to...

cert-manager/controller/CertificateReadiness "msg"="re-queuing item due to error processing" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"echo-tls\": StorageError: invalid object, Code: 4, Key: /cert-manager.io/certificates/sandbox/echo-tls, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ..., UID in object meta: " "key"="sandbox/echo-tls"

After restarting the cert-manager pod, here are the logs:

cert-manager/controller/certificaterequests/handleOwnedResource "msg"="error getting referenced owning resource" "error"="certificaterequest.cert-manager.io \"echo-tls-bkmm8\" not found" "related_resource_kind"="CertificateRequest" "related_resource_name"="echo-tls-bkmm8" "related_resource_namespace"="sandbox" "resource_kind"="Order" "resource_name"="echo-tls-bkmm8-1177139468" "resource_namespace"="sandbox" "resource_version"="v1"

cert-manager/controller/orders "msg"="re-queuing item due to error processing" "error"="ACME client for issuer not initialised/available" "key"="sandbox/echo-tls-dwpt4-1177139468"

Then the same logs as before.

The issuer:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ***
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress: {}
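Since one of the logs above says "ACME client for issuer not initialised/available", it may be worth checking the issuer's state before anything else. A minimal sketch, assuming the default cert-manager install namespace:

```shell
# Check whether the ClusterIssuer's ACME account is registered
# and the issuer reports a Ready condition.
kubectl describe clusterissuer letsencrypt-prod

# The ACME account key referenced by privateKeySecretRef lives in
# the namespace cert-manager runs in; verify the secret exists.
kubectl get secret letsencrypt-prod -n cert-manager
```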

The deployment config:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: <APP_NAME>
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx-<ENV>
    acme.cert-manager.io/http01-ingress-class: nginx-<ENV>
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - ***.fr
    secretName: <APP_NAME>-tls
  rules:
  - host: ***.fr
    http:
      paths:
      - backend:
          serviceName: <APP_NAME>
          servicePort: 80

.k8s_config: &k8s_config
  before_script:
    - export HOME=/tmp
    - export K8S_NAMESPACE="${APP_NAME}"
    - kubectl config set-cluster k8s --server="${K8S_SERVER}"
    - kubectl config set clusters.k8s.certificate-authority-data ${K8S_CA_DATA}
    - kubectl config set-credentials default --token="${K8S_USER_TOKEN}"
    - kubectl config set-context default --cluster=k8s --user=default --namespace=default
    - kubectl config set-context ${K8S_NAMESPACE} --cluster=k8s --user=default --namespace=${K8S_NAMESPACE}
    - kubectl config use-context default
    - if [ -z `kubectl get namespace ${K8S_NAMESPACE} --no-headers --output=go-template={{.metadata.name}} 2>/dev/null` ]; then kubectl create namespace ${K8S_NAMESPACE}; fi
    - if [ -z `kubectl --namespace=${K8S_NAMESPACE} get secret *** --no-headers --output=go-template={{.metadata.name}} 2>/dev/null` ]; then kubectl get secret *** --output yaml | sed "s/namespace:\ default/namespace:\ ${K8S_NAMESPACE}/" | kubectl create -f - ; fi
    - kubectl config use-context ${K8S_NAMESPACE}

1. Certificates are usually stored in Kubernetes secrets: https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets. You can retrieve secrets using kubectl get secrets --all-namespaces. You can also check which secrets a given pod uses by inspecting its yaml description: kubectl get pods -n <pod-namespace> -o yaml (more information: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
2. Namespaces are cluster-wide; they do not live on any node, so deleting a node does not delete any namespace.
3. If the leads above do not cover your need, could you please provide some yaml files and command-line instructions so the problem can be reproduced?
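Point 1 above can be sketched concretely. A hedged example for finding and removing a leftover TLS secret (the secret name and namespace are taken from the question and may differ in your cluster):

```shell
# List TLS secrets across all namespaces to spot leftovers.
kubectl get secrets --all-namespaces \
  --field-selector type=kubernetes.io/tls

# Delete the stale secret once identified.
kubectl delete secret echo-tls -n sandbox --ignore-not-found
```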

Finally, this Sunday, cert-manager stopped the challenges for the old tls on its own, without any further action being taken.