Using a PV in an OpenShift 3 cron job

I have been able to successfully create a cron job for my OpenShift 3 project. The project is a lift-and-shift of an existing Linux web server, and parts of the existing application rely on several cron tasks to run. The one I am looking at now is a daily update to the application's database. As part of the cron job's execution I want to write to a log file. A PV/PVC is already defined for the main application, and I intend to use it to hold the logs from my cron job, but the cron job does not appear to be given access to the PV.
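For reference, the claim I intend to reuse looks roughly like this (a minimal sketch; only the claim name data-pv comes from my manifest below, the access mode and size here are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pv
spec:
  accessModes:
    - ReadWriteMany   # assumed, so the web server and the cron job can share the volume
  resources:
    requests:
      storage: 1Gi    # assumed size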

I am using the following inProgress.yml to define the cron job:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: in-progress
spec:
  schedule: "*/5 * * * *"       
  concurrencyPolicy: "Replace"  
  startingDeadlineSeconds: 200  
  suspend: false                
  successfulJobsHistoryLimit: 3 
  failedJobsHistoryLimit: 1     
  jobTemplate:                  
    spec:
      template:
        metadata:
          labels:               
            parent: "cronjobInProgress"
        spec:
          containers:
          - name: in-progress
            image: <image name>
            command: ["php",  "inProgress.php"]
          restartPolicy: OnFailure 
          volumeMounts:
            - mountPath: /data-pv
              name: log-vol
      volumes:
        - name: log-vol
          persistentVolumeClaim:
            claimName: data-pv

I am creating the cron job with the following command:

oc create -f inProgress.yml
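Creation succeeds, and the schedule and the jobs it spawns can be checked with standard oc commands:

oc get cronjob in-progress
oc get jobs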

The cron job runs, but the PHP script cannot write its logs:

PHP Warning: fopen(/data-pv/logs/2022-04-27-app.log): failed to open stream: No such file or directory in /opt/app-root/src/errorHandler.php on line 75
WARNING: [2] mkdir(): Permission denied, line 80 in file /opt/app-root/src/errorLogger.php
WARNING: [2] fopen(/data-pv/logs/2022-04-27-inprogress.log): failed to open stream: No such file or directory, line 60 in file /opt/app-root/src/errorLogger.php
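To see whether the volume was attached, the spawned pod can be inspected directly (the pod name placeholder is hypothetical; it can be found via the parent label set in the manifest):

oc get pods -l parent=cronjobInProgress
oc get pod <pod name> -o yaml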

Looking at the YAML of the executed pod, there is no mention of data-pv - it appears as if the secret volumeMount, which has been added by OpenShift, is removing any further volumeMounts:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted
  creationTimestamp: '2022-04-27T13:25:04Z'
  generateName: in-progress-1651065900-
...
    volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-n9jsw
        readOnly: true
...
  volumes:
    - name: default-token-n9jsw
      secret:
        defaultMode: 420
        secretName: default-token-n9jsw

How can I access the PV from within the cron job?

Your manifest is incorrect. The volumes block needs to be part of spec.jobTemplate.spec.template.spec - that is, it needs to be indented at the same level as spec.jobTemplate.spec.template.spec.containers. In its current position it is invisible to OpenShift. See, for example, this pod example.

Similarly, volumeMounts and restartPolicy are parameters of the container block and need to be indented accordingly.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: in-progress
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Replace
  startingDeadlineSeconds: 200
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            parent: cronjobInProgress
        spec:
          containers:
            - name: in-progress
              image: <image name>
              command:
                - php
                - inProgress.php
              restartPolicy: OnFailure
              volumeMounts:
                - mountPath: /data-pv
                  name: log-vol
          volumes:
            - name: log-vol
              persistentVolumeClaim:
                claimName: data-pv

Thanks to larsks for the informative response.

When I copied your suggested manifest, OpenShift reported the following:

$ oc create -f InProgress.yml
The CronJob "in-progress" is invalid: spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"

As your answer was extremely helpful, I was able to resolve this by moving restartPolicy: OnFailure up to the pod spec (nested under the container it is not a recognized field, so the pod template falls back to the default Always, which Jobs do not accept), so the final manifest is as follows.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: in-progress
spec:
  schedule: "*/5 * * * *"       
  concurrencyPolicy: "Replace"  
  startingDeadlineSeconds: 200  
  suspend: false                
  successfulJobsHistoryLimit: 3 
  failedJobsHistoryLimit: 1     
  jobTemplate:                  
    spec:
      template:
        metadata:
          labels:               
            parent: "cronjobInProgress"
        spec:
          restartPolicy: OnFailure 
          containers:
          - name: in-progress
            image: <image name>
            command: ["php",  "updateToInProgress.php"]
            volumeMounts:
              - mountPath: /data-pv
                name: log-vol
          volumes:
            - name: log-vol
              persistentVolumeClaim:
                claimName: data-pv
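With the volumes block in the right place and restartPolicy on the pod spec, the job pods should now mount /data-pv, and the output of a finished run can be checked with (pod name placeholder is hypothetical):

oc get pods -l parent=cronjobInProgress
oc logs <pod name>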