Kubernetes - Implementing a Kubernetes Master HA solution on CentOS 7

I am implementing an HA solution for the Kubernetes master nodes in a CentOS 7 environment.

My environment looks like this:

K8S_Master1 : 172.16.16.5
K8S_Master2 : 172.16.16.51
HAProxy     : 172.16.16.100
K8S_Minion1 : 172.16.16.50


etcd Version: 3.1.7
Kubernetes v1.5.2
CentOS Linux release 7.3.1611 (Core)

My etcd cluster is set up correctly and is in working order.

[root@master1 ~]# etcdctl cluster-health
member 282a4a2998aa4eb0 is healthy: got healthy result from http://172.16.16.51:2379
member dd3979c28abe306f is healthy: got healthy result from http://172.16.16.5:2379
member df7b762ad1c40191 is healthy: got healthy result from http://172.16.16.50:2379
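
As a quick sanity check, the member list can also be confirmed with the standard etcdctl command (shown here as a suggestion, not captured output from my cluster):

etcdctl member list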

My K8S configuration on Master1 is:

[root@master1 ~]# cat /etc/kubernetes/apiserver 
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.100.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
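
A side note on this file: since the cluster-health output above shows etcd answering on 2379 on every node, an alternative (just a sketch, not what I am currently running) would be to point the apiserver at all members instead of a single local endpoint:

KUBE_ETCD_SERVERS="--etcd_servers=http://172.16.16.5:2379,http://172.16.16.51:2379,http://172.16.16.50:2379"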

[root@master1 ~]# cat /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"

[root@master1 ~]# cat /etc/kubernetes/controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect"

[root@master1 ~]# cat /etc/kubernetes/scheduler 
KUBE_SCHEDULER_ARGS="--leader-elect"
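
If I understand the leader-election mechanism correctly, the elected leader can be inspected on the Endpoints objects that the components use as their lock in kube-system (the annotation name mentioned below is what I believe this version uses, so treat it as an assumption):

kubectl -n kube-system get endpoints kube-scheduler -o yaml
kubectl -n kube-system get endpoints kube-controller-manager -o yaml

The control-plane.alpha.kubernetes.io/leader annotation on these objects should name the current holder.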

As for Master2, I have configured it as:

[root@master2 kubernetes]# cat apiserver 
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.100.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

[root@master2 kubernetes]# cat config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"

[root@master2 kubernetes]# cat scheduler 
KUBE_SCHEDULER_ARGS=""

[root@master2 kubernetes]# cat controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS=""

Note that --leader-elect is configured only on Master1, since I want Master1 to be the leader.

My HAProxy configuration is simple:

frontend K8S-Master
    bind 172.16.16.100:8080
    default_backend K8S-Master-Nodes

backend K8S-Master-Nodes
    mode        http
    balance     roundrobin
    server      master1 172.16.16.5:8080 check
    server      master2 172.16.16.51:8080 check
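
One refinement I have been considering (a sketch only, not deployed yet) is to have HAProxy health-check the apiserver's /healthz endpoint instead of doing a plain TCP check, so a stopped apiserver is taken out of rotation immediately:

backend K8S-Master-Nodes
    mode        http
    balance     roundrobin
    option      httpchk GET /healthz
    server      master1 172.16.16.5:8080 check
    server      master2 172.16.16.51:8080 check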

Now I have pointed my minion at the load balancer IP rather than directly at a master IP.

The minion's configuration is:

[root@minion kubernetes]# cat /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://172.16.16.100:8080"
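
For the node to register through the load balancer, the kubelet's --api-servers setting has to point there as well; on the CentOS packages that lives in /etc/kubernetes/kubelet, roughly along these lines (a sketch, variable names may differ by package version):

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=172.16.16.50"
KUBELET_API_SERVER="--api-servers=http://172.16.16.100:8080"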

On both masters I can see the minion/node status as Ready:

[root@master1 ~]# kubectl get nodes
NAME           STATUS    AGE
172.16.16.50   Ready     2h

[root@master2 ~]# kubectl get nodes
NAME           STATUS    AGE
172.16.16.50   Ready     2h
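
The same can be verified through the load balancer itself, which is a nice confirmation that HAProxy is passing API traffic correctly (kubectl's -s flag points it at an arbitrary API endpoint):

kubectl -s http://172.16.16.100:8080 get nodes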

I set up a sample nginx pod using:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

I created the replication controller on Master1 with:

[root@master1 ~]# kubectl create -f nginx.yaml

And on both masters I can see the pods being created:

[root@master1 ~]# kubectl get po
NAME          READY     STATUS    RESTARTS   AGE
nginx-jwpxd   1/1       Running   0          29m
nginx-q613j   1/1       Running   0          29m

[root@master2 ~]# kubectl get po
NAME          READY     STATUS    RESTARTS   AGE
nginx-jwpxd   1/1       Running   0          29m
nginx-q613j   1/1       Running   0          29m
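
The replication controller itself can be checked the same way on either master, and should report 2 desired and 2 current replicas:

kubectl get rc nginx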

Now, thinking logically, if I take down the Master1 node and delete the pods from Master2, Master2 should recreate the pods. So that is what I did.

On Master1:

[root@master1 ~]# systemctl stop kube-scheduler ; systemctl stop kube-apiserver ; systemctl stop kube-controller-manager
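
Before moving on to Master2, a quick check that HAProxy has failed over to the remaining apiserver (the /healthz endpoint should still answer "ok" through the frontend):

curl http://172.16.16.100:8080/healthz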

On Master2:

[root@slave1 kubernetes]# kubectl delete po --all
pod "nginx-l7mvc" deleted
pod "nginx-r3m58" deleted

Now Master2 should recreate the pods, since the Replication Controller is still running. But the new pods are stuck in:

[root@master2 kubernetes]# kubectl get po
NAME          READY     STATUS        RESTARTS   AGE
nginx-l7mvc   1/1       Terminating   0          13m
nginx-qv6z9   0/1       Pending       0          13m
nginx-r3m58   1/1       Terminating   0          13m
nginx-rplcz   0/1       Pending       0          13m

I have waited quite a while, but the pods remain stuck in this state.
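
To see why the new pods never leave Pending, the scheduling events can be inspected (pod name taken from the output above); with no active scheduler I would expect the Events section to show no scheduling attempts at all:

kubectl describe po nginx-qv6z9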

But when I restart the services on Master1:

[root@master1 ~]# systemctl start kube-scheduler ; systemctl start kube-apiserver ; systemctl start kube-controller-manager

then I see progress on Master1:

NAME          READY     STATUS              RESTARTS   AGE
nginx-qv6z9   0/1       ContainerCreating   0          14m
nginx-rplcz   0/1       ContainerCreating   0          14m

[root@slave1 kubernetes]# kubectl get po
NAME          READY     STATUS    RESTARTS   AGE
nginx-qv6z9   1/1       Running   0          15m
nginx-rplcz   1/1       Running   0          15m

Why doesn't Master2 recreate the pods? That is the confusion I am trying to resolve. I have come a long way toward a fully functional HA setup, but it seems it will only be complete once I figure out this puzzle.

In my view, the problem is that Master2 does not have the --leader-elect flag enabled. Only one scheduler process and one controller-manager process may be active at any given time, which is the reason --leader-elect exists: its purpose is to have the processes "compete" to decide which scheduler and controller-manager instance is active at a given moment. Since you did not set the flag on both masters, two scheduler and controller-manager processes were active at once, and that is why you ran into a conflict. To fix this, I suggest enabling this flag on all of your masters.
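
Concretely, a sketch of the fix on Master2 (mirroring the Master1 files shown in the question), followed by a restart of the affected components:

# /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--leader-elect"

# /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect"

systemctl restart kube-scheduler kube-controller-manager

With leader election enabled everywhere, exactly one scheduler and one controller-manager hold the lock at any time, and a surviving master can take over when the current leader goes down.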

In addition, per the k8s documentation: https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/#best-practices-for-replicating-masters-for-ha-clusters

Do not use a cluster with two master replicas. Consensus on a two replica cluster requires both replicas running when changing persistent state. As a result, both replicas are needed and a failure of any replica turns cluster into majority failure state. A two-replica cluster is thus inferior, in terms of HA, to a single replica cluster.