[k8s] Trying to assign one pod to a normal node and the others to spot nodes using podAntiAffinity
I have 6 nodes, all of which have the label "group:emp"; 4 of them also have the label "ikind:spot" and the other 2 have the label "ikind:normal".
I used the deployment YAML below to try to put one pod on a normal node and the rest on spot nodes, but it did not work.
I started increasing the replica count from 1 towards 6, but as soon as it reached 2, all of the pods were scheduled onto spot nodes and none onto a normal node.
```
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pod-test
  namespace: emp
  labels:
    app: pod-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pod-test
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: pod-test
    spec:
      containers:
      - name: pod-test
        image: k8s.gcr.io/busybox
        args: ["sh", "-c", "sleep 60000"]
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 200Mi
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: group
                operator: In
                values:
                - emp
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 70
            preference:
              matchExpressions:
              - key: ikind
                operator: In
                values:
                - spot
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - pod-test
              topologyKey: ikind
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - pod-test
            topologyKey: "kubernetes.io/hostname"
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
```
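A quick way to confirm how the nodes are actually labeled (a verification step, not part of the manifest above) is to list the two label columns directly:

```
kubectl get nodes -L group,ikind
```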
If you want the pods to be able to land on all of the nodes, then you have to change the preferredDuringSchedulingIgnoredDuringExecution section.
Change
```
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 70
  preference:
    matchExpressions:
    - key: ikind
      operator: In
      values:
      - spot
```
to
```
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 70
  preference:
    matchExpressions:
    - key: ikind
      operator: In
      values:
      - spot
      - normal
```
Now the pods will be deployed onto both kinds of nodes, ikind:spot and ikind:normal; before, they only went to spot.
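To try the change, a minimal sequence (the manifest filename here is only a placeholder; the deployment name and namespace come from the manifest above) is to re-apply the edited Deployment, scale it up, and check where the pods land:

```
kubectl apply -f pod-test.yaml
kubectl scale deployment pod-test -n emp --replicas=6
kubectl get pods -n emp -o wide
```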
I have tested it on 3 GKE nodes and everything seems to work fine.
```
pod-test-54dc97fbcb-9hvvm   1/1   Running   gke-cluster-1-default-pool-1ffaf1b8-gmhb   <none>   <none>
pod-test-54dc97fbcb-k2hv2   1/1   Running   gke-cluster-1-default-pool-1ffaf1b8-gmhb   <none>   <none>
pod-test-54dc97fbcb-nqd97   1/1   Running   gke-cluster-1-default-pool-1ffaf1b8-7c25   <none>   <none>
pod-test-54dc97fbcb-zq9df   1/1   Running   gke-cluster-1-default-pool-1ffaf1b8-jk6t   <none>   <none>
pod-test-54dc97fbcb-zvwhk   1/1   Running   gke-cluster-1-default-pool-1ffaf1b8-7c25   <none>   <none>
```
It is described well here (the node affinity example from the Kubernetes documentation):
```
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
```
This node affinity rule says the pod can only be placed on a node with a label whose key is kubernetes.io/e2e-az-name and whose value is either e2e-az1 or e2e-az2. In addition, among nodes that meet that criteria, nodes with a label whose key is another-node-label-key and whose value is another-node-label-value should be preferred.
I added a preferred matchExpressions entry for normal to the nodeAffinity and gave it a weight, and it worked.
To keep the different node counts from influencing the result, I changed the weights so that normal (70) is preferred over spot (30). Normal nodes now score higher, so the first pod lands there, and the required podAntiAffinity on kubernetes.io/hostname still keeps every replica on a separate node:
When replicas is 1, there is 1 pod on a normal node
When replicas is 2, there is 1 pod on a normal node and 1 pod on a spot node
When replicas is 3, there are 2 pods on normal nodes and 1 pod on a spot node
```
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 70
  preference:
    matchExpressions:
    - key: ikind
      operator: In
      values:
      - normal
- weight: 30
  preference:
    matchExpressions:
    - key: ikind
      operator: In
      values:
      - spot
```
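For completeness, this is how the updated preferences sit inside the pod template's nodeAffinity; the required group=emp term and the podAntiAffinity rules from the original manifest stay exactly as before, so only the nodeAffinity block is sketched here:

```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: group
          operator: In
          values:
          - emp
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 70
      preference:
        matchExpressions:
        - key: ikind
          operator: In
          values:
          - normal
    - weight: 30
      preference:
        matchExpressions:
        - key: ikind
          operator: In
          values:
          - spot
```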