There is no ephemeral-storage resource on worker node of kubernetes

I am trying to set up a Kubernetes worker node on an arm64 board. The worker node never changes from the NotReady state to the Ready state.

I checked the node conditions with the following command:

$ kubectl describe nodes

...
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 02 Dec 2020 14:37:46 +0900   Wed, 02 Dec 2020 14:34:35 +0900   KubeletNotReady              [container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]
...

Capacity:
  cpu:     8
  memory:  7770600Ki
  pods:    110
Allocatable:
  cpu:     8
  memory:  7668200Ki
  pods:    110
...

This worker node does not seem to report an ephemeral-storage resource, which appears to be why this message is generated:

[container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]

However, the root filesystem is mounted on /, as shown below:

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/root             23602256   6617628  15945856  30% /
devtmpfs               3634432         0   3634432   0% /dev
tmpfs                  3885312         0   3885312   0% /dev/shm
tmpfs                  3885312    100256   3785056   3% /run
tmpfs                  3885312         0   3885312   0% /sys/fs/cgroup
tmpfs                   524288     25476    498812   5% /tmp
tmpfs                   524288       212    524076   1% /var/volatile
tmpfs                   777060         0    777060   0% /run/user/1000
/dev/sde4               122816     49088     73728  40% /firmware
/dev/sde5                65488       608     64880   1% /bt_firmware
/dev/sde7                28144     20048      7444  73% /dsp

How can I get the ephemeral-storage resource detected on the Kubernetes worker node?
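
For reference, my understanding is that the kubelet (via cAdvisor) derives the node's ephemeral-storage capacity from the filesystem backing its root directory, /var/lib/kubelet by default, so something like the following should show which filesystem it would measure and whether the kubelet reports a detection error (the path and the systemd unit name are assumptions based on a default setup):

$ df -h /var/lib/kubelet
$ journalctl -u kubelet | grep -i -E 'ephemeral|rootfs|capacity'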

======================================================================

I have added the full output of $ kubectl get nodes and $ kubectl describe nodes.

$ kubectl get nodes
NAME             STATUS     ROLES    AGE     VERSION
raas-linux       Ready      master   6m25s   v1.19.4
robot-dd9f6aaa   NotReady   <none>   5m16s   v1.16.2-dirty
$
$ kubectl describe nodes
Name:               raas-linux
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=raas-linux
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"a6:a1:0b:43:38:29"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.3.106
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 04 Dec 2020 09:54:49 +0900
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  raas-linux
  AcquireTime:     <unset>
  RenewTime:       Fri, 04 Dec 2020 10:00:19 +0900
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 04 Dec 2020 09:55:14 +0900   Fri, 04 Dec 2020 09:55:14 +0900   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:54:45 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:54:45 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:54:45 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Fri, 04 Dec 2020 09:55:19 +0900   Fri, 04 Dec 2020 09:55:19 +0900   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.3.106
  Hostname:    raas-linux
Capacity:
  cpu:                8
  ephemeral-storage:  122546800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8066548Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  112939130694
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7964148Ki
  pods:               110
System Info:
  Machine ID:                 5aa3b32d7e9e409091929e7cba2d558b
  System UUID:                a930a228-a79a-11e5-9e9a-147517224400
  Boot ID:                    4e6dd5d2-bcc4-433b-8c4d-df56c33a9442
  Kernel Version:             5.4.0-53-generic
  OS Image:                   Ubuntu 18.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.10
  Kubelet Version:            v1.19.4
  Kube-Proxy Version:         v1.19.4
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                  ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-f9fd979d6-h7hd5               100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m9s
  kube-system                 coredns-f9fd979d6-hbkbl               100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m9s
  kube-system                 etcd-raas-linux                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system                 kube-apiserver-raas-linux             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system                 kube-controller-manager-raas-linux    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m20s
  kube-system                 kube-flannel-ds-k8b2d                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m9s
  kube-system                 kube-proxy-wgn4l                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
  kube-system                 kube-scheduler-raas-linux             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m20s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  100m (1%)
  memory             190Mi (2%)  390Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age    From        Message
  ----    ------                   ----   ----        -------
  Normal  Starting                 5m20s  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m20s  kubelet     Node raas-linux status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m20s  kubelet     Node raas-linux status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m20s  kubelet     Node raas-linux status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  5m20s  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m8s   kube-proxy  Starting kube-proxy.
  Normal  NodeReady                5m     kubelet     Node raas-linux status is now: NodeReady


Name:               robot-dd9f6aaa
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=robot-dd9f6aaa
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 04 Dec 2020 09:55:58 +0900
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  robot-dd9f6aaa
  AcquireTime:     <unset>
  RenewTime:       Fri, 04 Dec 2020 10:00:16 +0900
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 04 Dec 2020 09:55:58 +0900   Fri, 04 Dec 2020 09:55:58 +0900   KubeletNotReady              [container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]
Addresses:
  InternalIP:  192.168.3.102
  Hostname:    robot-dd9f6aaa
Capacity:
  cpu:     8
  memory:  7770620Ki
  pods:    110
Allocatable:
  cpu:     8
  memory:  7668220Ki
  pods:    110
System Info:
  Machine ID:                 de6c58c435a543de8e13ce6a76477fa0
  System UUID:                de6c58c435a543de8e13ce6a76477fa0
  Boot ID:                    d0999dd7-ab7d-4459-b0cd-9b25f5a50ae4
  Kernel Version:             4.9.103-sda845-smp
  OS Image:                   Kairos - Smart Machine Platform 1.0
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://19.3.2
  Kubelet Version:            v1.16.2-dirty
  Kube-Proxy Version:         v1.16.2-dirty
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                     ------------  ----------  ---------------  -------------  ---
  kube-system                 kube-flannel-ds-9xc6n    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m21s
  kube-system                 kube-proxy-4dk7f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (1%)  100m (1%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type    Reason                   Age    From     Message
  ----    ------                   ----   ----     -------
  Normal  Starting                 4m22s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  4m21s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m21s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m21s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 4m10s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  4m10s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m10s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m10s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 3m59s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m59s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m59s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     3m59s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 3m48s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m48s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     3m48s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    3m48s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  Starting                 3m37s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m36s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m36s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     3m36s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 3m25s  kubelet  Starting kubelet.
  Normal  Starting                 3m14s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3m3s   kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 3m3s   kubelet  Starting kubelet.
  Normal  Starting                 2m52s  kubelet  Starting kubelet.
  Normal  Starting                 2m40s  kubelet  Starting kubelet.
  Normal  Starting                 2m29s  kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  2m29s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m29s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m29s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  2m18s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m18s  kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m18s  kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 2m18s  kubelet  Starting kubelet.
  Normal  Starting                 2m7s   kubelet  Starting kubelet.
  Normal  Starting                 115s   kubelet  Starting kubelet.
  Normal  NodeHasNoDiskPressure    104s   kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  Starting                 104s   kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  104s   kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 93s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  93s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    93s    kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     93s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 82s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  82s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 71s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  70s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    70s    kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     70s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientPID
  Normal  Starting                 59s    kubelet  Starting kubelet.
  Normal  Starting                 48s    kubelet  Starting kubelet.
  Normal  Starting                 37s    kubelet  Starting kubelet.
  Normal  Starting                 26s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  25s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 15s    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  14s    kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  Starting                 3s     kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  3s     kubelet  Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3s     kubelet  Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure

======================================================================

1. Delete the /etc/docker/daemon.json file and reboot.
2. Install the CNI plugin binaries into the /opt/cni/bin directory: https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-arm64-v0.8.7.tgz (see the command sketch below).
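
A minimal sketch of step 2, assuming the plugins are unpacked directly into /opt/cni/bin and the kubelet is restarted afterwards (how the kubelet is supervised on this board is an assumption; a reboot as in step 1 works too):

$ sudo mkdir -p /opt/cni/bin
$ curl -LO https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-arm64-v0.8.7.tgz
$ sudo tar -xzf cni-plugins-linux-arm64-v0.8.7.tgz -C /opt/cni/bin
$ sudo systemctl restart kubelet    # assumes systemd; otherwise restart the kubelet however it is managed

With the binaries in place, the "cni config uninitialized" part of the KubeletNotReady message should clear once the flannel DaemonSet writes its config under /etc/cni/net.d.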

======================================================================

Step 1: kubectl get mutatingwebhookconfigurations -oyaml > mutating.txt

Step 2: kubectl delete -f mutating.txt

Step 3: Reboot the node

Step 4: You should be able to see that the node is Ready

Step 5: Reinstall the mutatingwebhook configurations (see the sketch below)
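
Step 5 gives no command; assuming the export from step 1 is still in mutating.txt, re-installing it could look like this (the exported objects may need their resourceVersion/uid metadata removed before they can be re-created):

$ kubectl apply -f mutating.txt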