Letting only one Elasticsearch pod come up on a node in Kubernetes

Our product has a multi-node setup in which we need to deploy multiple Elasticsearch pods. Since all of these are data nodes and have volume mounts for persistent storage, we don't want two pods to come up on the same node. I'm trying to use the anti-affinity feature of Kubernetes, but to no avail.

The cluster deployment is done through Rancher. We have 5 nodes in the cluster, and three of them (say node-1, node-2 and node-3) carry the label test.service.es-master: "true". So when I deploy the helm chart and scale it up to 3, the Elasticsearch pods come up and run on all three of those nodes. But if I scale it to 4, the 4th data node comes up on one of the nodes already in use. Is that the correct behaviour? My understanding was that imposing a strict anti-affinity should prevent the pods from coming up on the same node. I have referred to multiple blogs and forums (e.g. this and this), and the changes they suggest are similar to mine. I'm attaching the relevant section of the helm chart.
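For context, scaling is done by bumping replicaCount on the chart; the release and chart names below are just placeholders:

helm upgrade es-data ./elasticsearch-chart --set replicaCount=4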

The requirement is that ES should come up only on those nodes tagged with the specific key-value pair mentioned above, and each of those nodes should host only one pod. Any feedback is appreciated.
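For reference, a quick way to confirm which nodes actually carry that label is a label-filtered node listing:

kubectl get nodes -l test.service.es-master=true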

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  clusterIP: None
  ports:
  ...
  selector:
    test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      test.service.es-master: "true"
  serviceName: {{ .Values.service.name }}
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: test.service.es-master
            operator: In
            values:
            - "true"
        topologyKey: kubernetes.io/hostname
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                  - "true"
              topologyKey: kubernetes.io/hostname
      securityContext:
             ...
      volumes:
        ...
      ...
status: {}

Update-1

As suggested in the comments and answers, I have added the anti-affinity section under template.spec. But unfortunately the issue still persists. The updated yaml looks like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: {{ .Values.service.httpport | quote }}
    port: {{ .Values.service.httpport }}
    targetPort: {{ .Values.service.httpport }}
  - name: {{ .Values.service.tcpport | quote }}
    port: {{ .Values.service.tcpport }}
    targetPort: {{ .Values.service.tcpport }}
  selector:
    test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      test.service.es-master: "true"
  serviceName: {{ .Values.service.name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
            matchExpressions:
            - key: test.service.es-master
              operator: In
              values:
              - "true"
            topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                  - "true"
              topologyKey: kubernetes.io/hostname
      securityContext:
             readOnlyRootFilesystem: false
      volumes:
       - name: elasticsearch-data-volume
         hostPath:
            path: /opt/ca/elasticsearch/data
      initContainers:
         - name: elasticsearch-data-volume
           image: busybox
           securityContext:
                  privileged: true
           command: ["sh", "-c", "chown -R 1010:1010 /var/data/elasticsearch/nodes"]
           volumeMounts:
              - name: elasticsearch-data-volume
                mountPath: /var/data/elasticsearch/nodes
      containers:
      - env:
        {{- range $key, $val := .Values.data }}
        - name: {{ $key }} 
          value: {{ $val | quote }}
        {{- end}}
        image: {{ .Values.image.registry }}/analytics/{{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.service.name }}
        ports:
        - containerPort: {{ .Values.service.httpport }}
        - containerPort: {{ .Values.service.tcpport }}
        volumeMounts:
              - name: elasticsearch-data-volume
                mountPath: /var/data/elasticsearch/nodes    
        resources:
          limits:
            memory: {{ .Values.resources.limits.memory }}
          requests:
            memory: {{ .Values.resources.requests.memory }}
        restartPolicy: Always
status: {}
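For what it's worth, the placement described above is what a wide pod listing shows (the pods carry the test.service.es-master: "true" label, as in the chart above):

kubectl get pods -l test.service.es-master=true -o wide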

As Egor suggested, you need podAntiAffinity:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"

Source: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#always-co-located-in-the-same-node

So, with your current labels, it might look like this:

spec:
  affinity:
    nodeAffinity:
    # node affinity stuff here
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: "test.service.es-master"
            operator: In
            values:
            - "true"
        topologyKey: "kubernetes.io/hostname"

Make sure you put it in the correct place in your yaml, otherwise it won't work.
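Concretely, the block belongs under spec.template.spec (the pod template), not under the top-level Deployment spec; a minimal sketch of the nesting, with everything else elided:

spec:
  template:
    metadata:
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
      containers:
      - ...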

This works with Kubernetes 1.11.5:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      test.service.es-master: "true"
  template:
    metadata:
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                  - "true"
      containers:
      - image: nginx:1.7.10
        name: nginx

I don't know why you chose the same key/value for the pod deployment selector label as for the node selector. They are confusing at the very least...

First, both in your initial manifest and in the updated one, you are using topologyKey under nodeAffinity. This will give you an error when you try to deploy these manifests with kubectl create or kubectl apply, because there is no API key called topologyKey for nodeAffinity (Ref doc).

Second, you are using a key named test.service.es-master for your nodeAffinity. Are you sure your "nodes" have those labels? Please confirm with this command: kubectl get nodes --show-labels
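If the label is missing, it can be added to the three nodes mentioned in the question, for example (node-1 is a placeholder for the actual node name):

kubectl label nodes node-1 test.service.es-master=true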

Finally, augmenting @Laszlo's answer and your (@bitswazsky's) comment on it, to simplify things you can use the code below:

Here I have used a node label named role (as the key) to identify the nodes; you can add it to the nodes of an existing cluster by executing this command: kubectl label nodes <node-name> role=platform

  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role
                operator: In
                values:
                - platform
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: component
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname
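Once the pods are up, the spread can be verified with a wide listing filtered on the pod label used above, which should show at most one pod per labelled node:

kubectl get pods -l component=nginx -o wide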