AWS EKS K8s Service and CronJob/Job on the same node

I have a k8s deployment that includes a cron job (runs hourly), a service (runs an HTTP server), and a storage class (a PVC to store data, using gp2).

The problem I'm seeing is that gp2 only supports ReadWriteOnce access.
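For reference, a minimal sketch of what such a PVC could look like (the names here are hypothetical); the accessModes field is where the ReadWriteOnce constraint shows up:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce # <-- gp2 (EBS-backed) volumes only support this mode
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi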

I've noticed that when the cron job creates a job and it lands on the same node as the service, it can mount the volume fine.

Is there anything I can do in the service, deployment, or cron job YAML to ensure that the cron job and the service always land on the same node? It can be any node, as long as the cron job goes to the same node as the service.

This isn't an issue in my lower environments, since we have very few nodes there, but it is a problem in production where we have more nodes.

In short, I want my cron job, which creates a job (and in turn a pod), to run that pod on the same node as my service's pod.

I know this isn't best practice, but our web service reads data from the PVC and serves it. The cron job pulls new data from other sources and leaves it there for the web server.

Happy to hear other ideas/approaches.

Thanks

Focusing only on this part of the question:

How can I schedule a workload (Pod, Job, Cronjob) on a specific set of Nodes

You can schedule your Cronjob/Job on a specific set of nodes with either:

  • nodeSelector
  • nodeAffinity

nodeSelector

nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.

-- Kubernetes.io: Docs: Concepts: Scheduling eviction: Assign pod node: Node selector
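Before applying the manifest below, the node needs to carry that label. Assuming the example key/value schedule=here used throughout this answer, you can attach it with kubectl (the node name is a placeholder):

$ kubectl label nodes <your-node-name> schedule=here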

An example could look like the following (assuming that your node has the specific label that is referenced in .spec.jobTemplate.spec.template.spec.nodeSelector):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector: # <-- IMPORTANT
            schedule: "here" # <-- IMPORTANT
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Running the above manifest will schedule your Pods (created by the Cronjob) on a node that has the schedule=here label:

$ kubectl get pods -o wide
NAME                     READY   STATUS      RESTARTS   AGE     IP          NODE                                   NOMINATED NODE   READINESS GATES
hello-1616323740-mqdmq   0/1     Completed   0          2m33s   10.4.2.67   node-ffb5                              <none>           <none>
hello-1616323800-wv98r   0/1     Completed   0          93s     10.4.2.68   node-ffb5                              <none>           <none>
hello-1616323860-66vfj   0/1     Completed   0          32s     10.4.2.69   node-ffb5                              <none>           <none>
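You can also check which nodes carry the label (and are therefore eligible) with a label selector:

$ kubectl get nodes -l schedule=here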

nodeAffinity

Node affinity is conceptually similar to nodeSelector -- it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.

There are currently two types of node affinity, called requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. You can think of them as "hard" and "soft" respectively, in the sense that the former specifies rules that must be met for a pod to be scheduled onto a node (just like nodeSelector but using a more expressive syntax), while the latter specifies preferences that the scheduler will try to enforce but will not guarantee.

-- Kubernetes.io: Docs: Concepts: Scheduling eviction: Assign pod node: Node affinity

An example could look like the following (assuming that your node has the specific label that is referenced in .spec.jobTemplate.spec.template.spec.affinity):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # --- nodeAffinity part
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: schedule
                    operator: In
                    values:
                    - here
          # --- nodeAffinity part
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
$ kubectl get pods -o wide
NAME                     READY   STATUS      RESTARTS   AGE     IP           NODE                                   NOMINATED NODE   READINESS GATES
hello-1616325840-5zkbk   0/1     Completed   0          2m14s   10.4.2.102   node-ffb5                              <none>           <none>
hello-1616325900-lwndf   0/1     Completed   0          74s     10.4.2.103   node-ffb5                              <none>           <none>
hello-1616325960-j9kz9   0/1     Completed   0          14s     10.4.2.104   node-ffb5                              <none>           <none>
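The docs quoted above also mention a "soft" preferredDuringSchedulingIgnoredDuringExecution type. As a sketch, swapping the required block in the manifest above for the preferred variant would make the scheduler prefer, but not guarantee, the labeled nodes (note the additional weight field, which ranges from 1 to 100):

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100 # higher weight = stronger preference
      preference:
        matchExpressions:
        - key: schedule
          operator: In
          values:
          - here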

Additional resources:

I think you could also take a look at this StackOverflow answer: