When cronjobs are set to replace, does Kubernetes wait for the previous job to finish shutting down before starting the new one?

As the question states, I just want to know whether Kubernetes waits for some kind of confirmation from the previous cron job that it has completely stopped before starting the new one, or whether the kill signal is sent and the new job starts at the same time.

For reference, below is all of the documentation there is on the Replace policy:

https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy

Concurrency Policy

  • Replace: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run
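For illustration, here is a minimal sketch of a CronJob with its concurrency policy set to Replace, written against the Kubernetes batch/v1 Go API types; the name, schedule, image, and command are placeholders, not anything taken from the question.

```go
package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildCronJob returns a CronJob whose concurrencyPolicy is Replace: if the
// previous Job run has not finished when the schedule fires again, that Job
// is deleted and a new Job is created in its place.
func buildCronJob() *batchv1.CronJob {
	return &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "example-replace"},
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/5 * * * *",             // placeholder: every 5 minutes
			ConcurrencyPolicy: batchv1.ReplaceConcurrent, // the Replace policy under discussion
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyNever,
							Containers: []corev1.Container{{
								Name:  "worker",
								Image: "busybox", // placeholder image
								// Deliberately runs longer than the schedule interval,
								// so the Replace behaviour is exercised on every tick.
								Command: []string{"sh", "-c", "sleep 600"},
							}},
						},
					},
				},
			},
		},
	}
}

func main() { _ = buildCronJob() }
```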

Short intro/definition:

  • Cron jobs go back long in the history of UNIX and Linux. Combined with other Kubernetes technologies like Pods, containers, the scheduler, and the intelligent algorithms for Pod placement and health probes, CronJobs prove to be way more powerful than their traditional OS-level counterparts.
  • Since they run on containers, CronJobs provide a lot of flexibility for the developers. They need not worry about which platform the cron job runs on and the presence of the required dependencies as everything runs on the container.
  • Kubernetes handles CronJob execution, what happens when it misses an execution time, and how many times the job should run. This allows the developers to focus more on writing code and addressing business issues rather than worrying about the internals of code execution.
  • The business application is still responsible for handling what happens when the cron job runs, does not run, gets canceled, or runs concurrently.

Read more here: cronjob
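As a hypothetical illustration of the last bullet point above (the application itself has to deal with its run being cancelled), a job's process can watch for the SIGTERM that Kubernetes sends and finish its current unit of work before exiting. Everything here, including the fake batch loop and its timings, is an assumption about how the workload might be written:

```go
package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// ctx is cancelled when the kubelet sends SIGTERM while deleting the Pod.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	for i := 0; ; i++ {
		select {
		case <-ctx.Done():
			log.Println("SIGTERM received, finishing current work and exiting cleanly")
			return
		case <-time.After(2 * time.Second):
			log.Printf("processed batch %d", i) // placeholder unit of work
		}
	}
}
```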

If a CronJob has its concurrency policy set to Replace and the previous Job is still running, that Job will be deleted, which also deletes its Pod - look at the code.

Regarding your question:

When a Pod is being deleted, the Linux container(s) will be sent a SIGTERM and then a SIGKILL after a grace period, which defaults to 30 seconds. The terminationGracePeriodSeconds property in a PodSpec can be set to override that default.
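As a sketch of that override (again using the batch/v1 Go types, with an arbitrary 120-second value), the setting goes on the Pod template inside the CronJob's Job template:

```go
package main

import (
	batchv1 "k8s.io/api/batch/v1"
)

// withLongerGracePeriod overrides terminationGracePeriodSeconds on the Pod
// template used by the CronJob's Jobs, giving containers more time between
// the SIGTERM and the final SIGKILL when a Replace deletion happens.
func withLongerGracePeriod(spec *batchv1.CronJobSpec) {
	grace := int64(120) // arbitrary example value; the default is 30 seconds
	spec.JobTemplate.Spec.Template.Spec.TerminationGracePeriodSeconds = &grace
}

func main() {
	var spec batchv1.CronJobSpec
	withLongerGracePeriod(&spec)
	_ = spec
}
```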

Look at the flag in the code that is added in the DeleteJob function. It seems that this delete only removes values from the kube key/value store, which could mean the new Pod or Job is created while the current Pod or Job is still terminating. You could confirm this with a Job that doesn't respect SIGTERM and has a terminationGracePeriodSeconds set to several times your cluster's scheduling speed.
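One hypothetical way to run that experiment is a container whose process ignores SIGTERM, so it only exits when the SIGKILL arrives after the grace period; if a replacement Pod appears while this one is still alive, the delete clearly did not wait for termination. The program below is just such a throwaway test, not anything from the question or from the Kubernetes source:

```go
package main

import (
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Deliberately ignore SIGTERM so the Pod only dies when the kubelet
	// sends SIGKILL after terminationGracePeriodSeconds has elapsed.
	signal.Ignore(syscall.SIGTERM)

	for {
		log.Println("still running; check whether the replacement Pod is already up")
		time.Sleep(5 * time.Second)
	}
}
```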

Take a look at the answer provided by @Matt.