Kubelet - Factory "crio" was unable to handle container

I am trying to install Kubernetes 1.21.1 with the kubespray master branch. The hosts are behind a proxy server, and I have set http_proxy, https_proxy, and no_proxy both in the cri-o environment and in the global environment:

Master 1: 192.168.33.33, Master 2: 192.168.33.34, Master 3: 192.168.33.35

HTTP_PROXY=http://squid1.example.it:3128
http_proxy=http://squid1.example.it:3128
no_proxy=localhost,localhost4,127.0.0.1,.example.it,192.168.33.33,192.168.33.34,192.168.33.35,192.168.33.32,192.168.33.31,10.233.0.0/18,10.233.64.0/18,cz-itops-m1,cz-itops-m2,cz-itops-m3
NO_PROXY=localhost,localhost4,127.0.0.1,.example.it,192.168.33.33,192.168.33.34,192.168.33.35,192.168.33.32,192.168.33.31,10.233.0.0/18,10.233.64.0/18,cz-itops-m1,cz-itops-m2,cz-itops-m3
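For completeness, this is roughly how the proxy variables are passed to cri-o via a systemd drop-in; the exact file name and path are an assumption (kubespray may place them elsewhere):

# /etc/systemd/system/crio.service.d/http-proxy.conf (assumed location)
[Service]
Environment="HTTP_PROXY=http://squid1.example.it:3128"
Environment="HTTPS_PROXY=http://squid1.example.it:3128"
# NO_PROXY: same list as the global environment above (abbreviated here)
Environment="NO_PROXY=localhost,127.0.0.1,.example.it,192.168.33.33,192.168.33.34,192.168.33.35,10.233.0.0/18,10.233.64.0/18"

# reload units and restart cri-o after editing the drop-in
systemctl daemon-reload && systemctl restart crio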

I cannot get the kube-apiserver to start with the cri-o runtime, and the kubelet keeps restarting. Initializing the first master fails when kubeadm init tries to connect to the kube-api: the connection is refused.

I searched Google but did not find an answer to my problem.

Can anyone help me?

Kubelet logs

Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848422   22460 manager.go:917] ignoring container "/system.slice/run-utsns-cc759b9b\x2d6f5b\x2d4c4a\x2d87e6\x2db745ac9613dd.mount"
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848426   22460 factory.go:220] Factory "containerd" was unable to handle container "/system.slice/run-utsns-af812bcd\x2d22ab\x2d4629\x2d82e2\x2d7270f669cc66.mount"
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848430   22460 factory.go:220] Factory "crio" was unable to handle container "/system.slice/run-utsns-af812bcd\x2d22ab\x2d4629\x2d82e2\x2d7270f669cc66.mount"
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848435   22460 factory.go:213] Factory "systemd" can handle container "/system.slice/run-utsns-af812bcd\x2d22ab\x2d4629\x2d82e2\x2d7270f669cc66.mount", but ignoring.
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848442   22460 manager.go:917] ignoring container "/system.slice/run-utsns-af812bcd\x2d22ab\x2d4629\x2d82e2\x2d7270f669cc66.mount"
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848447   22460 factory.go:220] Factory "containerd" was unable to handle container "/system.slice/run-utsns-d12d86b2\x2d6bb1\x2d4068\x2d83e6\x2d394fdecf43f0.mount"
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848451   22460 factory.go:220] Factory "crio" was unable to handle container "/system.slice/run-utsns-d12d86b2\x2d6bb1\x2d4068\x2d83e6\x2d394fdecf43f0.mount"
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848456   22460 factory.go:213] Factory "systemd" can handle container "/system.slice/run-utsns-d12d86b2\x2d6bb1\x2d4068\x2d83e6\x2d394fdecf43f0.mount", but ignoring.
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848462   22460 manager.go:917] ignoring container "/system.slice/run-utsns-d12d86b2\x2d6bb1\x2d4068\x2d83e6\x2d394fdecf43f0.mount"
Jun 23 14:53:19 cz-itops-m2 kubelet[22460]: I0623 14:53:18.848467   22460 factory.go:220] Factory "containerd" was unable to handle container "/system.slice/run-utsns-e28bde38\x2d6a1e\x2d4952\x2dba63\x2df5b4d366d130.mount"

Jun 23 15:23:28 cz-itops-m2 kubelet[51252]: E0623 15:23:28.584748   51252 kubelet.go:2291] "Error getting node" err="node \"cz-itops-m2\" not found"
Jun 23 15:23:28 cz-itops-m2 kubelet[51252]: E0623 15:23:28.685707   51252 kubelet.go:2291] "Error getting node" err="node \"cz-itops-m2\" not found"
Jun 23 15:23:28 cz-itops-m2 kubelet[51252]: E0623 15:23:28.786381   51252 kubelet.go:2291] "Error getting node" err="node \"cz-itops-m2\" not found"

Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.137671   51252 prober.go:173] "HTTP-Probe Host" scheme="http" host="127.0.0.1" port=2381 path="/health"
Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.137725   51252 prober.go:176] "HTTP-Probe Headers" headers=map[]
Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.139467   51252 http.go:134] Probe succeeded for http://127.0.0.1:2381/health, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Length:[17] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 23 Jun 2021 13:23:30 GMT]] 0xc0013cb660 17 [] true false map[] 0xc001640300 <nil>}
Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.139550   51252 prober.go:125] "Probe succeeded" probeType="Liveness" pod="kube-system/etcd-cz-itops-m2" podUID=1d7187d9accf1de2c438723a0b4b7229 containerName="etcd"
Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.156803   51252 prober.go:173] "HTTP-Probe Host" scheme="https" host="192.168.33.34" port=10257 path="/healthz"
Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.156848   51252 prober.go:176] "HTTP-Probe Headers" headers=map[]
Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.167119   51252 http.go:134] Probe succeeded for https://192.168.33.34:10257/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Cache-Control:[no-cache, private] Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 23 Jun 2021 13:23:30 GMT] X-Content-Type-Options:[nosniff]] 0xc000295cc0 2 [] false false map[] 0xc000c90100 0xc0017b2210}
Jun 23 15:23:30 cz-itops-m2 kubelet[51252]: I0623 15:23:30.167199   51252 prober.go:125] "Probe succeeded" probeType="Liveness" pod="kube-system/kube-controller-manager-cz-itops-m2" podUID=f0be62f929531d12c573191d7cc11439 containerName="kube-controller-manager"


Jun 23 15:27:45 cz-itops-m2 kubelet[56261]: I0623 15:27:45.549719   56261 round_trippers.go:454] GET https://192.168.33.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcz-itops-m2&limit=500&resourceVersion=0  in 0 milliseconds
Jun 23 15:27:45 cz-itops-m2 kubelet[56261]: I0623 15:27:45.549734   56261 round_trippers.go:460] Response Headers:
Jun 23 15:27:45 cz-itops-m2 kubelet[56261]: E0623 15:27:45.549816   56261 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.33.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcz-itops-m2&limit=500&resourceVersion=0": dial tcp 192.168.33.34:6443: connect: connection refused
Jun 23 15:27:46 cz-itops-m2 kubelet[56261]: I0623 15:27:46.151133   56261 reflector.go:255] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
Jun 23 15:27:46 cz-itops-m2 kubelet[56261]: I0623 15:27:46.151381   56261 round_trippers.go:435] curl -k -v -XGET  -H "User-Agent: kubelet/v1.21.1 (linux/amd64) kubernetes/5e58841" -H "Accept: application/vnd.kubernetes.protobuf,application/json" 'https://192.168.33.34:6443/api/v1/services?limit=500&resourceVersion=0'
Jun 23 15:27:46 cz-itops-m2 kubelet[56261]: I0623 15:27:46.151694   56261 round_trippers.go:454] GET https://192.168.33.34:6443/api/v1/services?limit=500&resourceVersion=0  in 0 milliseconds
Jun 23 15:27:46 cz-itops-m2 kubelet[56261]: I0623 15:27:46.151713   56261 round_trippers.go:460] Response Headers:
Jun 23 15:27:46 cz-itops-m2 kubelet[56261]: E0623 15:27:46.151854   56261 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.33.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.33.34:6443: connect: connection refused
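At this point nothing is accepting connections on 6443, which can be confirmed directly on the node (standard tools only, nothing cluster-specific assumed):

# is anything listening on the API server port?
ss -tlnp | grep 6443

# is a kube-apiserver container running or crash-looping?
crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube-apiserver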

● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-06-23 15:33:00 CEST; 6s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 62056 (kubelet)
    Tasks: 10
   Memory: 23.9M
   CGroup: /system.slice/kubelet.service
           └─62056 /usr/local/bin/kubelet --logtostderr=true --v=55555 --node-ip=192.168.33.34 --hostname-override=cz-itops-m2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/etc/kubernetes/kubelet-config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --runtime-cgroups=/systemd/system.slice --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin

Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398843   62056 flags.go:59] FLAG: --cpu-cfs-quota="true"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398846   62056 flags.go:59] FLAG: --cpu-cfs-quota-period="100ms"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398850   62056 flags.go:59] FLAG: --cpu-manager-policy="none"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398853   62056 flags.go:59] FLAG: --cpu-manager-reconcile-period="10s"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398856   62056 flags.go:59] FLAG: --docker="unix:///var/run/docker.sock"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398860   62056 flags.go:59] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398864   62056 flags.go:59] FLAG: --docker-env-metadata-whitelist=""
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398867   62056 flags.go:59] FLAG: --docker-only="false"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398870   62056 flags.go:59] FLAG: --docker-root="/var/lib/docker"
Jun 23 15:33:00 cz-itops-m2 kubelet[62056]: I0623 15:33:00.398873   62056 flags.go:59] FLAG: --docker-tls="false"

kubeadm init command

/usr/local/bin/kubeadm init --v=5 --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --skip-phases=addon/coredns --upload-certs

kubeadm output

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 5m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:225
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:225
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1371

Kubelet configuration - /var/lib/kubelet/config.yaml

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.233.0.10
clusterDomain: na.tipsport.it
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
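Since the kubelet is configured with cgroupDriver: systemd, cri-o has to use the same cgroup manager. A quick way to check, assuming the default cri-o config locations:

grep -R "cgroup_manager" /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
# should report: cgroup_manager = "systemd" to match the kubelet setting above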

/etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.33.34:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.33.34
    - --allow-privileged=true
    - --anonymous-auth=True
    - --apiserver-count=3
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --default-not-ready-toleration-seconds=300
    - --default-unreachable-toleration-seconds=300
    - --enable-admission-plugins=NodeRestriction
    - --enable-aggregator-routing=False
    - --enable-bootstrap-token-auth=true
    - --endpoint-reconciler-type=lease
    - --etcd-cafile=/etc/kubernetes/ssl/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/ssl/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/ssl/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --event-ttl=1h0m0s
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
    - --profiling=False
    - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
    - --request-timeout=1m0s
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.na.tipsport.it
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
    - --service-cluster-ip-range=10.233.0.0/18
    - --service-node-port-range=30000-32767
    - --storage-backend=etcd3
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
    env:
    - name: NO_PROXY
      value: localhost,localhost4,127.0.0.1,.tipsport.it,192.168.33.33,192.168.33.34,192.168.33.35,192.168.33.32,192.168.33.31,10.233.0.0/18,10.233.64.0/18,cz-itops-m1,cz-itops-m2,cz-itops-m3
    - name: http_proxy
      value: http://squid1.tipsport.it:3128
    - name: HTTPS_PROXY
      value: http://squid1.tipsport.it:3128
    - name: https_proxy
      value: http://squid1.tipsport.it:3128
    - name: no_proxy
      value: localhost,localhost4,127.0.0.1,.tipsport.it,192.168.33.33,192.168.33.34,192.168.33.35,192.168.33.32,192.168.33.31,10.233.0.0/18,10.233.64.0/18,cz-itops-m1,cz-itops-m2,cz-itops-m3
    - name: HTTP_PROXY
      value: http://squid1.tipsport.it:3128
    image: k8s.gcr.io/kube-apiserver:v1.21.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.33.34
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.33.34
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 30
      httpGet:
        host: 192.168.33.34
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/pki/ca-trust
      name: etc-pki-ca-trust
      readOnly: true
    - mountPath: /etc/pki/tls
      name: etc-pki-tls
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/pki/ca-trust
      type: ""
    name: etc-pki-ca-trust
  - hostPath:
      path: /etc/pki/tls
      type: ""
    name: etc-pki-tls
  - hostPath:
      path: /etc/kubernetes/ssl
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
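The liveness and startup probes in this manifest call /livez on https://192.168.33.34:6443, so the same endpoint can be checked by hand from the node while the pod is crash-looping (a sketch only; -k skips TLS verification):

curl -k https://192.168.33.34:6443/livez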

[cz-itops-m2(192.168.33.34) ~]# crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
I0623 15:51:33.398777   81236 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
75cdd4d1557a3       771ffcf9ca634e37cbd3202fd86bd7e2df48ecba4067d1992541bfa00e88a9bb   20 seconds ago       Running             kube-apiserver            81                  2700619d64cd8
b17394a27c5bd       771ffcf9ca634e37cbd3202fd86bd7e2df48ecba4067d1992541bfa00e88a9bb   About a minute ago   Exited              kube-apiserver            80                  2700619d64cd8
fd0cf2d7612ed       e16544fd47b02fea6201a1c39f0ffae170968b6dd48ac2643c4db3cab0011ed4   About an hour ago    Running             kube-controller-manager   2                   890ba9948e61d
b43902ed63f8a       a4183b88f6e65972c4b176b43ca59de31868635a7e43805f4c6e26203de1742f   About an hour ago    Running             kube-scheduler            2                   ed70ac01a9292
72cbe05accf99       a4183b88f6e65972c4b176b43ca59de31868635a7e43805f4c6e26203de1742f   About an hour ago    Exited              kube-scheduler            1                   ed70ac01a9292
188e83ace11f6       e16544fd47b02fea6201a1c39f0ffae170968b6dd48ac2643c4db3cab0011ed4   About an hour ago    Exited              kube-controller-manager   1                   890ba9948e61d
39af1f7134c6c       771ffcf9ca634e37cbd3202fd86bd7e2df48ecba4067d1992541bfa00e88a9bb   About an hour ago    Exited              kube-apiserver            10                  96e499dceadb9
29b005d4ee49d       e16544fd47b02fea6201a1c39f0ffae170968b6dd48ac2643c4db3cab0011ed4   About an hour ago    Exited              kube-controller-manager   1                   6f2ae6f00c016
d8350d394c87f       a4183b88f6e65972c4b176b43ca59de31868635a7e43805f4c6e26203de1742f   About an hour ago    Exited              kube-scheduler            1                   efe947983d387
02dc663715f28       771ffcf9ca634e37cbd3202fd86bd7e2df48ecba4067d1992541bfa00e88a9bb   About an hour ago    Exited              kube-apiserver            0                   14fb9e0dccee5
b778962034c37       d1985d4043858c43893f46e699d39a7640156962f1ccf0a2d64b8c8d621f00fc   About an hour ago    Running             etcd                      0                   65383591efff3

The problem was that IPv6 was enabled on localhost. It was using the IPv6 localhost address ([::1]), as you can see below:

E0623 15:44:27.039563       1 reflector.go:138] k8s.io/kubernetes/pkg/controlplane/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://[::1]:6443/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: i/o timeout
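A quick way to confirm whether localhost resolves to the IPv6 loopback on the node (standard tools, no assumptions about the cluster):

getent hosts localhost
# or look for a "::1   localhost" entry directly
grep -n "::1" /etc/hosts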

To get the logs from the kube-api container, run:
crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

Then:

crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs [container id]

Just make sure you have the following in /etc/sysctl.conf so that IPv6 is disabled:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
and then apply it with:

sysctl -p
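A quick sanity check that the setting is active after reloading (both values should be 1 when IPv6 is disabled):

sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6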