Kubernetes cluster "cni config uninitialized"
The issue I'm running into is very similar to other existing posts, except that they all share the same solution, so I'm creating a new thread.
Problem:
The master node is still in "NotReady" status after installing Flannel.
Expected result:
The master node becomes "Ready" after installing Flannel.
Background:
I followed this guide when installing Flannel.
One concern is that I'm using Kubelet v1.17.2 by default, which was only released last month (can anyone confirm that v1.17.2 works with Flannel?)
Here is the output after running kubectl describe node machias on the master node:
Name: machias
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=machias
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"be:78:65:7f:ae:6d"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.122.172
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 15 Feb 2020 01:00:01 -0500
Taints: node.kubernetes.io/not-ready:NoExecute
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: machias
AcquireTime: <unset>
RenewTime: Sat, 15 Feb 2020 13:54:56 -0500
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 192.168.122.172
Hostname: machias
Capacity:
cpu: 2
ephemeral-storage: 38583284Ki
hugepages-2Mi: 0
memory: 4030364Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 35558354476
hugepages-2Mi: 0
memory: 3927964Ki
pods: 110
System Info:
Machine ID: 20cbe0d737dd43588f4a2bccd70681a2
System UUID: ee9bc138-edee-471a-8ecc-f1c567c5f796
Boot ID: 0ba49907-ec32-4e80-bc4c-182fccb0b025
Kernel Version: 5.3.5-200.fc30.x86_64
OS Image: Fedora 30 (Workstation Edition)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.5
Kubelet Version: v1.17.2
Kube-Proxy Version: v1.17.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-machias 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-apiserver-machias 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-controller-manager-machias 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-flannel-ds-amd64-rrfht 100m (5%) 100m (5%) 50Mi (1%) 50Mi (1%) 12h
kube-system kube-proxy-z2q7d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-scheduler-machias 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (32%) 100m (5%)
memory 50Mi (1%) 50Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
And the output of the following command: kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-7nz46 0/1 Pending 0 12h
kube-system coredns-6955765f44-xk5r2 0/1 Pending 0 13h
kube-system etcd-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-apiserver-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-controller-manager-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-flannel-ds-amd64-rrfht 1/1 Running 0 12h
kube-system kube-flannel-ds-amd64-t7p2p 1/1 Running 0 12h
kube-system kube-proxy-fnn78 1/1 Running 0 12h
kube-system kube-proxy-z2q7d 1/1 Running 0 13h
kube-system kube-scheduler-machias.cs.unh.edu 1/1 Running 0 13h
Thanks for any help!
Your PodCIDR value shows as 10.244.0.0/24. For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
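To see why the /16 matters: flannel by default carves one /24 subnet out of the cluster CIDR for each node, so the cluster CIDR has to be wide enough to contain those per-node /24s. A quick arithmetic sketch (plain shell, no cluster needed):

```shell
# Per-node /24 subnets that fit inside each candidate cluster CIDR:
echo "with --pod-network-cidr=10.244.0.0/16: $(( 1 << (24 - 16) )) node subnets"  # 256
echo "with a /24 cluster CIDR:               $(( 1 << (24 - 24) )) node subnets"  # 1
```

You can check which PodCIDR a node was actually assigned with kubectl get node machias -o jsonpath='{.spec.podCIDR}'.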
I reproduced your scenario using the same versions you are running, to make sure those versions work with Flannel.
After testing, I can confirm that the versions you are using are not the problem.
I built the cluster following these steps:
Make sure the iptables tooling does not use the nftables backend (Source)
update-alternatives --set iptables /usr/sbin/iptables-legacy
Install the container runtime
sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce-19.03.5-3.el7
sudo systemctl start docker
Install kubeadm, kubelet and kubectl
sudo su -c "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF"
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet-1.17.2-0 kubeadm-1.17.2-0 kubectl-1.17.2-0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
Note:
- Setting SELinux to permissive mode by running setenforce 0 and the sed command above effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks, for example. You have to do this until SELinux support is improved in the kubelet.
- Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly because iptables is bypassed. You should make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be checked by running lsmod | grep br_netfilter. To load it explicitly, call modprobe br_netfilter.
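The module and sysctl checks above can be combined into a quick pre-flight. This is a host-configuration sketch (requires root on the target host; the modules-load.d path is the standard systemd location):

```shell
# Load br_netfilter now and on every boot, then confirm the bridge sysctls are 1.
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```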
Initialize the cluster with the Flannel CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add the Flannel CNI
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
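Once the CNI config is in place you can verify the recovery: the not-ready taints should be gone from the node and the CoreDNS pods should leave Pending. A diagnostic sketch against a live cluster (node name machias taken from the question; CoreDNS pods carry the k8s-app=kube-dns label):

```shell
# Taints should no longer list node.kubernetes.io/not-ready.
kubectl describe node machias | grep -A2 Taints
# CoreDNS should be Running once pod networking works.
kubectl get pods -n kube-system -l k8s-app=kube-dns
```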
As you can see, my master node is Ready. Please follow this how-to and let me know whether you can reach your desired state.
$ kubectl describe nodes
Name: kubeadm-fedora
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=kubeadm-fedora
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"8e:7e:bf:d9:21:1e"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.128.15.200
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 17 Feb 2020 11:31:59 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: kubeadm-fedora
AcquireTime: <unset>
RenewTime: Mon, 17 Feb 2020 11:47:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:32:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.128.15.200
Hostname: kubeadm-fedora
Capacity:
cpu: 2
ephemeral-storage: 104844988Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7493036Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 96625140781
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7390636Ki
pods: 110
System Info:
Machine ID: 41689852cca44b659f007bb418a6fa9f
System UUID: 390D88CD-3D28-5657-8D0C-83AB1974C88A
Boot ID: bff1c808-788e-48b8-a789-4fee4e800554
Kernel Version: 3.10.0-1062.9.1.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.5
Kubelet Version: v1.17.2
Kube-Proxy Version: v1.17.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6955765f44-d9fb4 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 15m
kube-system coredns-6955765f44-l7xrk 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 15m
kube-system etcd-kubeadm-fedora 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-apiserver-kubeadm-fedora 250m (12%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-controller-manager-kubeadm-fedora 200m (10%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-flannel-ds-amd64-v6m2w 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 15m
kube-system kube-proxy-d65kl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-scheduler-kubeadm-fedora 100m (5%) 0 (0%) 0 (0%) 0 (0%) 15m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 190Mi (2%) 390Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 16m (x6 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 16m (x5 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 16m (x5 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 16m kubelet, kubeadm-fedora Updated Node Allocatable limit across pods
Normal Starting 15m kubelet, kubeadm-fedora Starting kubelet.
Normal NodeHasSufficientMemory 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet, kubeadm-fedora Updated Node Allocatable limit across pods
Normal Starting 15m kube-proxy, kubeadm-fedora Starting kube-proxy.
Normal NodeReady 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeReady
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubeadm-fedora Ready master 17m v1.17.2
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-d9fb4 1/1 Running 0 17m
kube-system coredns-6955765f44-l7xrk 1/1 Running 0 17m
kube-system etcd-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-apiserver-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-controller-manager-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-flannel-ds-amd64-v6m2w 1/1 Running 0 17m
kube-system kube-proxy-d65kl 1/1 Running 0 17m
kube-system kube-scheduler-kubeadm-fedora 1/1 Running 0 17m