Kubernetes anti-affinity rule to spread Deployment Pods to at least 2 nodes

I have configured the following anti-affinity rule in my k8s Deployment:

spec:
  ...
  selector:
    matchLabels:
      app: my-app
      environment: qa
  ...
  template:
    metadata:
      labels:
        app: my-app
        environment: qa
        version: v0
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname

With this I am saying that I do not want any Pod replica to be scheduled onto a node of my k8s cluster where a Pod of the same application already exists. So, for example, with:

nodes(a,b,c) = 3
replicas(1,2,3) = 3

replica_1 is scheduled on node_a, replica_2 is scheduled on node_b, and replica_3 is scheduled on node_c.

So each Pod ends up on a different node.

However, I was wondering whether there is a way to specify: "I want to spread my Pods across at least 2 nodes" to guarantee high availability, without spreading every Pod onto its own node, for example:

nodes(a,b,c) = 3
replicas(1,2,3) = 3

replica_1 is scheduled on node_a, replica_2 is scheduled on node_b, and replica_3 is scheduled (again) on node_a.

So, to sum up, I would like a softer constraint that guarantees high availability by spreading the Deployment's replicas across at least 2 nodes, without having to spin up one node per Pod of a given application.

Thanks!

I think I have found a solution to your problem. Take a look at this example yaml file (the labels here are generic; in your Deployment you would match your own Pod labels, e.g. app: my-app):

spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        example: app
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-1
            - worker-2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-1

The idea behind this configuration: I use nodeAffinity here to indicate on which nodes the Pods may be placed:

- key: kubernetes.io/hostname
  values:
  - worker-1
  - worker-2

It is important to set the following line:

- maxSkew: 1

According to the documentation:

maxSkew describes the degree to which Pods may be unevenly distributed. It must be greater than zero.

Because of this, the difference in the number of Pods assigned to the nodes will always be at most 1.
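
To make this concrete, here is a hypothetical walkthrough, assuming the two allowed nodes (worker-1, worker-2) and 3 replicas from this example; the skew is the difference in matching Pod counts between the most and least loaded eligible nodes:

# maxSkew: 1 with whenUnsatisfiable: DoNotSchedule
#
# final placement 2/1: worker-1 -> 2 Pods, worker-2 -> 1 Pod
#   skew = 2 - 1 = 1   (<= maxSkew, allowed)
#
# final placement 3/0: worker-1 -> 3 Pods, worker-2 -> 0 Pods
#   skew = 3 - 0 = 3   (> maxSkew, DoNotSchedule rejects it)

So with 3 replicas and 2 eligible nodes, both nodes always receive at least one Pod, which is exactly the "spread across at least 2 nodes" guarantee you asked for.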

This section:

      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-1

is optional, but it lets you fine-tune the Pod distribution across the available nodes even further. Here you can find a description of the difference between requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution:

Thus an example of requiredDuringSchedulingIgnoredDuringExecution would be "only run the pod on nodes with Intel CPUs" and an example preferredDuringSchedulingIgnoredDuringExecution would be "try to run this set of pods in failure zone XYZ, but if it's not possible, then allow some to run elsewhere".
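
For comparison, the quoted distinction also applies to podAntiAffinity itself: a purely "soft" variant of your original rule would use preferredDuringSchedulingIgnoredDuringExecution, so the scheduler tries to spread the Pods but still schedules them even if only one node is available. A minimal sketch (note that preferred anti-affinity terms wrap the selector in podAffinityTerm together with a weight):

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - my-app
          topologyKey: kubernetes.io/hostname

Unlike the topologySpreadConstraints approach above, this only prefers spreading and does not guarantee a minimum of 2 nodes; you can inspect the resulting placement with kubectl get pods -o wide -l app=my-app.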