Kubernetes: Frequently gets "Error adding network: no IP addresses available in network: cbr0"

I set up a single-node Kubernetes cluster using kubeadm, on Ubuntu 16.04 LTS with flannel.

Most of the time everything works fine, but every few days the cluster gets into a state where it can no longer schedule new pods - the pods get stuck in the "Pending" state, and when I `kubectl describe pod` one of those pods, I get an error message like this:

Events:
  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----                -------------   --------    ------      -------
  2m        2m      1   {default-scheduler }                Normal      Scheduled   Successfully assigned dex-1939802596-zt1r3 to superserver-03
  1m        2s      21  {kubelet superserver-03}            Warning     FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "somepod-1939802596-zt1r3_somenamespace" with SetupNetworkError: "Failed to setup network for pod \"somepod-1939802596-zt1r3_somenamespace(167f8345-faeb-11e6-94f3-0cc47a9a5cf2)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod"

I found this Stack Overflow question and the workaround it suggests. It does help recover (though it takes a few minutes), but after a while the problem comes back...

I also came across this open issue and managed to recover using the workaround suggested there, but the problem came back too. Besides, it's not exactly my case, and the issue there was considered resolved once the workaround was found... :\

Technical details:

kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Started the cluster with these commands:

kubeadm init --pod-network-cidr 10.244.0.0/16 --api-advertise-addresses 192.168.1.200

kubectl taint nodes --all dedicated-

kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Some syslog entries that may be relevant (I have many of these):

Feb 23 11:07:49 server-03 kernel: [  155.480669] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 11:07:49 server-03 dockerd[1414]: time="2017-02-23T11:07:49.735590817+02:00" level=warning msg="Couldn't run auplink before unmount /var/lib/docker/aufs/mnt/89bb7abdb946d858e175d80d6e1d2fdce0262af8c7afa9c6ad9d776f1f5028c4-init: exec: \"auplink\": executable file not found in $PATH"
Feb 23 11:07:49 server-03 kernel: [  155.496599] aufs au_opts_verify:1597:dockerd[24704]: dirperm1 breaks the protection by the permission bits on the lower branch
Feb 23 11:07:49 server-03 systemd-udevd[29313]: Could not generate persistent MAC address for vethd4d85eac: No such file or directory
Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.756976    1228 cni.go:255] Error adding network: no IP addresses available in network: cbr0
Feb 23 11:07:49 server-03 kernel: [  155.514994] IPv6: eth0: IPv6 duplicate address fe80::835:deff:fe4f:c74d detected!
Feb 23 11:07:49 server-03 kernel: [  155.515380] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 11:07:49 server-03 kernel: [  155.515588] device vethd4d85eac entered promiscuous mode
Feb 23 11:07:49 server-03 kernel: [  155.515643] cni0: port 34(vethd4d85eac) entered forwarding state
Feb 23 11:07:49 server-03 kernel: [  155.515663] cni0: port 34(vethd4d85eac) entered forwarding state
Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.757001    1228 cni.go:209] Error while adding to cni network: no IP addresses available in network: cbr0
Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.757056    1228 docker_manager.go:2201] Failed to setup network for pod "somepod-752955044-58g59_somenamespace(5d6c28e1-f8dd-11e6-9843-0cc47a9a5cf2)" using network plugins "cni": no IP addresses available in network: cbr0; Skipping pod

Thanks a lot!

EDIT:

I can reproduce the issue. It appears to be an exhaustion of the IP addresses in the kubelet's CIDR. Findings:

That said, how did all the IP addresses get used up, and how can this be fixed? Those workarounds can't be the only way...
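To make the exhaustion visible, it may help to count how many IP leases the CNI IPAM plugin currently holds. This is a minimal sketch, assuming flannel delegates to the `host-local` IPAM plugin, which stores one file named after each allocated IP under the network's state directory (typically `/var/lib/cni/networks/cbr0`; the path may differ on your setup):

```shell
# Count the IP leases held by the host-local IPAM plugin.
# Assumption: one file per allocated IP, named after the IP itself,
# in the network's state directory (default path below may differ).
count_cni_ips() {
    dir="${1:-/var/lib/cni/networks/cbr0}"
    # Each file whose name looks like an IPv4 address is one lease;
    # other files (e.g. last_reserved_ip) are not counted.
    ls "$dir" 2>/dev/null | grep -c '^[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}$'
}
```

If this number approaches the size of the node's pod CIDR (e.g. ~254 for a /24) while far fewer pods are actually running, the leases are leaking rather than being legitimately used.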

Thanks again.

EDIT (2)

Another related issue: https://github.com/containernetworking/cni/issues/306

For now, this is the best workaround I have found:

https://github.com/kubernetes/kubernetes/issues/34278#issuecomment-254686727

I've set up a cron job that runs this script at @reboot.
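The idea behind that workaround can be sketched as follows: each lease file under the host-local state directory contains the ID of the container holding that IP, so leases whose container no longer exists can be removed. This is a hedged, readable rewrite under my assumptions about the file layout, not the exact script from the issue comment; the directory path and the `docker ps` invocation are assumptions:

```shell
# Remove host-local IPAM leases whose owning container is gone.
# Assumption: each lease file is named after an IP and contains the
# container ID on its first line. Passing the live IDs in as an
# argument keeps this testable without a running Docker daemon.
release_stale_leases() {
    dir="$1"        # e.g. /var/lib/cni/networks/cbr0
    live_ids="$2"   # e.g. "$(docker ps -q --no-trunc)"
    for f in "$dir"/*; do
        case "$(basename "$f")" in
            *.*.*.*) ;;     # only files named after an IPv4 address
            *) continue ;;  # skip bookkeeping files like last_reserved_ip
        esac
        id=$(head -n1 "$f")
        # Drop the lease if its container ID is not among the live ones.
        echo "$live_ids" | grep -q "$id" || rm -f "$f"
    done
}

# Typical invocation (requires Docker):
#   release_stale_leases /var/lib/cni/networks/cbr0 "$(docker ps -q --no-trunc)"
```

Running something like this at boot (or periodically) frees the leaked leases before the pool runs dry, which matches the behavior I see: recovery works, but only until the leak fills the pool again.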

It seems the issue was addressed by a temp fix that garbage-collects pods when the Docker daemon restarts, but that feature is probably not enabled on my cluster.

A new, better long-term fix was merged just a few days ago, so hopefully this will be resolved in the upcoming Kubernetes 1.6.0 release.