Kubernetes Worker Node in Status NotReady after introducing a proxy in between worker node and control plane

I have set up a Kubernetes cluster with kubeadm: one control plane and one worker node.

Everything was working fine. Then I set up a Squid proxy on the worker node, and in the kubelet configuration I set http_proxy=http://127.0.0.1:3128, essentially asking the kubelet to use the proxy to communicate with the control plane.
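In case it helps to see that concretely: with a systemd-managed kubelet, a common way to pass such a variable is a drop-in file (the file name below is arbitrary, not something kubeadm creates for you):

# /etc/systemd/system/kubelet.service.d/10-http-proxy.conf
[Service]
Environment="http_proxy=http://127.0.0.1:3128"
Environment="https_proxy=http://127.0.0.1:3128"

followed by sudo systemctl daemon-reload && sudo systemctl restart kubelet to apply it.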

Using tcpdump, I can see network packets from the worker node landing on the control plane, and I can also issue the following command from the worker:

kubectl get no --server=https://10.128.0.63:6443
NAME        STATUS     ROLES    AGE    VERSION
k8-cp       Ready      master   6d6h   v1.17.0
k8-worker   NotReady   <none>   6d6h   v1.17.2

But the worker status remains NotReady. What might I be doing wrong?

I am using Flannel for networking here.

P.S. Before posting, I had also exported http_proxy=http://127.0.0.1:3128 as an environment variable and re-run the kubectl get no command above from the worker node.
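That is:

export http_proxy=http://127.0.0.1:3128
kubectl get no --server=https://10.128.0.63:6443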

In case the node status matters:

kubectl  describe no k8-worker
Name:               k8-worker
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8-worker
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"fe:04:d6:53:ef:cc"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.128.0.71
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 29 Jan 2020 08:08:33 +0000
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8-worker
  AcquireTime:     <unset>
  RenewTime:       Thu, 30 Jan 2020 11:51:24 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.128.0.71
  Hostname:    k8-worker
Capacity:
  cpu:                2
  ephemeral-storage:  104844988Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7493036Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  96625140781
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7390636Ki
  pods:               110
System Info:
  Machine ID:                 3221f625fa75d20f08bceb4cacf74e20
  System UUID:                6DD87A9F-7F72-5326-5B84-1B3CBC4D9DBE
  Boot ID:                    7412bb51-869f-40de-8b37-dcbad6bf84b4
  Kernel Version:             3.10.0-1062.9.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.13.1
  Kubelet Version:            v1.17.2
  Kube-Proxy Version:         v1.17.2
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (3 in total)
  Namespace                   Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                           ------------  ----------  ---------------  -------------  ---
  default                     nginx-86c57db685-fvh28         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d20h
  kube-system                 kube-flannel-ds-amd64-b8vbr    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6d23h
  kube-system                 kube-proxy-rsr7l               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d23h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  100m (5%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:              <none>

Link to the kubelet log on the worker:

https://pastebin.com/E90FNEXR

The kube-controller-manager (node controller) is responsible for monitoring node health via the "/healthz" endpoint exposed by the kubelet.
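As a quick local sanity check, you can hit the kubelet's own healthz endpoint on the worker (10248 is the kubelet's default healthz port; adjust if yours differs):

# On the worker node; bypass the local proxy for this call
curl --noproxy '*' -sS http://127.0.0.1:10248/healthz
# prints "ok" when the kubelet is healthy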

So far you have configured only one-way communication (from the node to the master) through the proxy.

The other components need the same treatment, especially the kube-controller-manager. That way you enable two-way communication through the HTTP proxy.

This is achievable by specifying http_proxy when running kubeadm init:

$ sudo http_proxy=192.168.1.20:3128 kubeadm init

Read more here: Kubeadm Issue 182

  • It creates a one-time variable that kubeadm reads in and then recreates in all control-plane components, also as an environment variable.

You will see output like this:

kubeadm@lab-1:~$ sudo http_proxy=192.168.1.20:3128 kubeadm init 
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
        [WARNING HTTPProxy]: Connection to "https://10.156.0.6" uses proxy "http://192.168.1.20:3128". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://192.168.1.20:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
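Note the second warning: the cluster-internal ranges should be excluded from the proxy, otherwise in-cluster traffic gets misrouted. A sketch, assuming the default service CIDR 10.96.0.0/12, the 10.244.0.0/16 Flannel pod network, and your API server address 10.128.0.63 as exceptions (recent Go HTTP clients, which kubeadm uses, understand CIDR entries in no_proxy):

sudo http_proxy=192.168.1.20:3128 \
     no_proxy=localhost,127.0.0.1,10.128.0.63,10.96.0.0/12,10.244.0.0/16 \
     kubeadm init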
  • Alternatively, you can do this manually via env variables, just as you did for the kubelet, by adjusting the pod spec of the kube-controller-manager (see the sketch after the link below).

Read more here: Kubeadm Issue 324
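For the manual route, the tweak could look roughly like this in the kube-controller-manager static pod manifest on the control-plane node (the path is the kubeadm default; the proxy address and exclusions are illustrative):

# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - name: kube-controller-manager
    env:
    - name: HTTP_PROXY
      value: http://192.168.1.20:3128
    - name: NO_PROXY
      value: localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16

The kubelet on the control plane watches /etc/kubernetes/manifests, so it recreates the pod automatically once the file is saved.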