Helm YAML iteration causes nil pointer
I am trying to iterate over the jobContainers array to generate multiple container instances in a CronJob I am creating.
My values.yaml looks like this:
jobContainers:
  - cleaner1:
      env:
        keepRunning: false
      logsPath: /nfs/data_etl/logs
  - cleaner2:
      env:
        keepRunning: false
      logsPath: /nfs/data_etl/logs
The relevant part of my template cronJob.yaml looks like:
{{- range $job, $val := .Values.jobContainers }}
- image: "{{ $image.repository }}:{{ $image.tag }}"
  imagePullPolicy: {{ $image.pullPolicy }}
  name: {{ $job }}
  env:
  - name: KEEP_RUNNING
    value: "{{ .env.keepRunning }}"
  volumeMounts:
  - name: {{ .logsPathName }}
    mountPath: /log
restartPolicy: Never
{{- end }}
helm install returns the following error:
executing "/templates/cronjob.yaml" at <.env.keepRunning>: nil pointer evaluating interface {}.keepRunning
My complete cronjob.yaml is as follows:
{{- $image := .Values.image }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
    chart: ni-filecleaner
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}
            cron: {{ .Values.filesjob.jobName }}
        spec:
          containers:
          {{- range $job, $val := .Values.jobContainers }}
          - image: "{{ $image.repository }}:{{ $image.tag }}"
            imagePullPolicy: {{ $image.pullPolicy }}
            name: {{ $job }}
            env:
            - name: KEEP_RUNNING
              value: "{{ $val.env.keepRunning }}"
            - name: FILE_RETENTION_DAYS
              value: "{{ .env.retentionPeriod }}"
            - name: FILE_MASK
              value: "{{ .env.fileMask }}"
            - name: ID
              value: "{{ .env.id }}"
            volumeMounts:
            - mountPath: /data
              name: {{ .dataPathName }}
            - name: {{ .logsPathName }}
              mountPath: /log
          restartPolicy: Never
          volumes:
          - name: {{ .dataPathName }}
            nfs:
              server: {{ .nfsIp }}
              path: {{ .dataPath }}
          - name: {{ .logsPathName }}
            nfs:
              server: {{ .nfsIp }}
              path: {{ .logsPath }}
          {{- end }}
  schedule: "{{ .Values.filesjob.schedule }}"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: {{ .Values.filesjob.successfulJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ .Values.filesjob.failedJobsHistoryLimit }}
  {{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 12 }}
  {{- end }}
The complete values.yaml is as follows:
replicaCount: 1

image:
  repository: app.corp/ni-etl-filecleaner
  tag: "3.0.3.1"
  pullPolicy: IfNotPresent

jobContainers:
  - processed:
      env:
        keepRunning: false
        fileMask: ne*.*
        retentionPeriod: 3
        id: processed
      jobName: processed
      dataPathName: path-to-clean-processed
      logsPathName: path-logfiles-processed
      dataPath: /nfs/data_etl/loader/processed
      logsPath: /nfs/data_etl/logs
      nfsIp: ngs.corp
  - incoming:
      env:
        keepRunning: false
        fileMask: ne*.*
        retentionPeriod: 3
        id: incoming
      jobName: incoming
      dataPathName: path-to-clean-incoming
      logsPathName: path-logfiles-incoming
      dataPath: /nfs/data_etl/loader/incoming
      logsPath: /nfs/data_etl/logs
      nfsIp: ngs.corp

resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
Both your values file and your template file appear to have some indentation problems. In your values.yaml everything is nested under the name key (cleaner1:), so each entry that range iterates over is a map with a single key; inside the loop, . is bound to the current entry, which means .env (and $val.env) is nil, and that is exactly what the error reports. The container fields need to sit at the same level as the name key. Also note that when ranging over a list, $job is the numeric index, so it should be quoted when used as the container name. Below are the corrected template and values files.
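As a minimal standalone sketch of what the loop actually sees (this snippet is illustrative, not part of your chart):

# values: each list item is a map with ONE key, the container name
jobContainers:
  - cleaner1:
      env:
        keepRunning: false

# template:
{{- range $job, $val := .Values.jobContainers }}
{{- /* $job = 0 (the list index); $val = {cleaner1: {env: ...}} */}}
{{- /* $val.env is nil here; the data actually lives at $val.cleaner1.env */}}
value: "{{ $val.env.keepRunning }}"   # -> nil pointer evaluating interface {}.keepRunning
{{- end }}

First, the corrected cronjob.yaml: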
{{- $image := .Values.image }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
    chart: ni-filecleaner
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}
            cron: {{ .Values.filesjob.jobName }}
        spec:
          containers:
          {{- range $job, $val := .Values.jobContainers }}
          - image: "{{ $image.repository }}:{{ $image.tag }}"
            imagePullPolicy: {{ $image.pullPolicy }}
            name: "{{ $job }}"
            env:
            - name: KEEP_RUNNING
              value: "{{ $val.env.keepRunning }}"
            - name: FILE_RETENTION_DAYS
              value: "{{ $val.env.retentionPeriod }}"
            - name: FILE_MASK
              value: "{{ $val.env.fileMask }}"
            - name: ID
              value: "{{ $val.env.id }}"
            volumeMounts:
            - mountPath: /data
              name: {{ $val.dataPathName }}
            - name: {{ $val.logsPathName }}
              mountPath: /log
          restartPolicy: Never
          volumes:
          - name: {{ $val.dataPathName }}
            nfs:
              server: {{ $val.nfsIp }}
              path: {{ $val.dataPath }}
          - name: {{ $val.logsPathName }}
            nfs:
              server: {{ $val.nfsIp }}
              path: {{ $val.logsPath }}
          {{- end }}
  schedule: "{{ .Values.filesjob.schedule }}"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: {{ .Values.filesjob.successfulJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ .Values.filesjob.failedJobsHistoryLimit }}
  {{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 12 }}
  {{- end }}
And the corrected values.yaml, with each entry's fields at the same level as its name key:

replicaCount: 1

image:
  repository: app.corp/ni-etl-filecleaner
  tag: "3.0.3.1"
  pullPolicy: IfNotPresent

filesjob:
  name: cleaner

jobContainers:
  - processed:
    env:
      keepRunning: false
      fileMask: ne*.*
      retentionPeriod: 3
      id: processed
    jobName: processed
    dataPathName: path-to-clean-processed
    logsPathName: path-logfiles-processed
    dataPath: /nfs/data_etl/loader/processed
    logsPath: /nfs/data_etl/logs
    nfsIp: ngs.corp
  - incoming:
    env:
      keepRunning: false
      fileMask: ne*.*
      retentionPeriod: 3
      id: incoming
    jobName: incoming
    dataPathName: path-to-clean-incoming
    logsPathName: path-logfiles-incoming
    dataPath: /nfs/data_etl/loader/incoming
    logsPath: /nfs/data_etl/logs
    nfsIp: ngs.corp

resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
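As a sanity check after applying these fixes, you can render the chart locally without installing it (the chart path below is illustrative; the exact install flags differ slightly between Helm 2 and Helm 3):

helm template ./ni-filecleaner
helm install --dry-run --debug ./ni-filecleaner

helm template prints the fully rendered manifests, so you can confirm that one container block is emitted per jobContainers entry and that no field renders as <nil>.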