DB credentials exposed as part of job parameters when executing a task from SCDF
I have a custom-built SCDF that is packaged as a Docker image for OpenShift and referenced as the Docker image in server-deployment.yaml. I use an external Oracle database to store the task metadata, and I pass all of the database properties through a ConfigMap. The database password is base64-encoded and stored in a Kubernetes Secret that the ConfigMap references. SCDF uses these database details to store the task metadata.
SCDF passes these datasource properties, including the database password from the ConfigMap, to the executing job as job parameters. As a result the credentials are printed in the logs and persisted in the BATCH_JOB_EXECUTION_PARAMS table.
I assumed that referencing the password as a Secret from the ConfigMap would prevent this, but it does not. Below are snippets of the log and the table showing the job parameters being printed.
How can I avoid passing these database properties as job parameters to the executing job, so that the credentials are not exposed?
12-06-2020 18:12:38.540 [main] INFO org.springframework.batch.core.launch.support.SimpleJobLauncher.run - Job:
[FlowJob: [name=Job]] launched with the following parameters: [{
-spring.cloud.task.executionid=8010,
-spring.cloud.data.flow.platformname=default,
-spring.datasource.username=ACTUAL_USERNAME,
-spring.cloud.task.name=Alljobs,
Job.ID=1591985558466,
-spring.datasource.password=ACTUAL_PASSWORD,
-spring.datasource.driverClassName=oracle.jdbc.OracleDriver,
-spring.datasource.url=DATASOURCE_URL,
-spring.batch.job.names=Job_1}]
Pod created for the job execution (OpenShift screenshot)
Database table (screenshot)
Custom SCDF Dockerfile.yaml
===========================
FROM maven:3.5.2-jdk-8-alpine AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn package
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/BatchAdmin-0.0.1-SNAPSHOT.jar /app/
ENTRYPOINT ["java", "-jar", "BatchAdmin-0.0.1-SNAPSHOT.jar"]
Deployment.yaml
===============
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
  labels:
    app: scdf-server
spec:
  selector:
    matchLabels:
      app: scdf-server
  replicas: 1
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
      - name: scdf-server
        image: docker-registry.default.svc:5000/batchadmin/scdf-server #DockerImage
        imagePullPolicy: Always
        volumeMounts:
          - name: config
            mountPath: /config
            readOnly: true
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /management/health
            port: 80
          initialDelaySeconds: 45
        readinessProbe:
          httpGet:
            path: /management/info
            port: 80
          initialDelaySeconds: 45
        resources:
          limits:
            cpu: 1.0
            memory: 2048Mi
          requests:
            cpu: 0.5
            memory: 1024Mi
        env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: "metadata.namespace"
        - name: SERVER_PORT
          value: '80'
        - name: SPRING_CLOUD_CONFIG_ENABLED
          value: 'false'
        - name: SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED
          value: 'true'
        - name: SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED
          value: 'true'
        - name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
          value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.6.0.BUILD-SNAPSHOT'
        - name: SPRING_CLOUD_KUBERNETES_CONFIG_ENABLE_API
          value: 'true'
        - name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API
          value: 'true'
        - name: SPRING_CLOUD_KUBERNETES_SECRETS_PATHS
          value: /etc/secrets
        - name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
          value: 'true'
        - name: SPRING_CLOUD_KUBERNETES_CONFIG_NAME
          value: scdf-server
        - name: SPRING_CLOUD_DATAFLOW_SERVER_URI
          value: 'http://${SCDF_SERVER_SERVICE_HOST}:${SCDF_SERVER_SERVICE_PORT}'
        # Add Maven repo for metadata artifact resolution for all stream apps
        - name: SPRING_APPLICATION_JSON
          value: "{ \"maven\": { \"local-repository\": null, \"remote-repositories\": { \"repo1\": { \"url\": \"https://repo.spring.io/libs-snapshot\"} } } }"
      serviceAccountName: scdf-sa
      volumes:
        - name: config
          configMap:
            name: scdf-server
            items:
            - key: application.yaml
              path: application.yaml
      #- name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
      #  value : 'true'
server-config.yaml
==================
apiVersion: v1
kind: ConfigMap
metadata:
  name: scdf-server
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
                      cpu: 2
                    entry-point-style: exec
                    image-pull-policy: always
      datasource:
        url: jdbc:oracle:thin:@db_url
        username: BATCH_APP
        password: ${oracle-root-password}
        driver-class-name: oracle.jdbc.OracleDriver
        testOnBorrow: true
        validationQuery: "SELECT 1"
      flyway:
        enabled: false
      jpa:
        hibernate:
          use-new-id-generator-mappings: true
oracle-secrets.yaml
===================
apiVersion: v1
kind: Secret
metadata:
  name: oracle
  labels:
    app: oracle
data:
  oracle-root-password: a2xldT3ederhgyzFCajE4YQ==
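For the ${oracle-root-password} placeholder in the ConfigMap to resolve, the secret has to be visible to the server, for example mounted under the /etc/secrets path that SPRING_CLOUD_DATAFLOW SPRING_CLOUD_KUBERNETES_SECRETS_PATHS points to. A minimal sketch of such a mount (not shown in the deployment above; the volume name oracle-secrets is illustrative):
# Deployment fragment (sketch): expose the "oracle" Secret under /etc/secrets,
# the path configured by SPRING_CLOUD_KUBERNETES_SECRETS_PATHS, so that
# Spring Cloud Kubernetes can resolve ${oracle-root-password}.
# The volume name "oracle-secrets" is illustrative.
        volumeMounts:
          - name: oracle-secrets
            mountPath: /etc/secrets
            readOnly: true
      volumes:
        - name: oracle-secrets
          secret:
            secretName: oracle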
Any help would be greatly appreciated. Thanks.
This is not a perfect solution, but you can use a feature of Logback (the default logging library used by Spring Boot) to mask the password in the logs.
Put the following configuration into your Logback config and it will replace the password with ****:
<springProfile name="local">
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>
                %d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] %-5level %logger{36}.%M - %replace(%msg){'password=\S*', 'password=****'}%n
            </pattern>
        </encoder>
    </appender>
    <root level="info">
        <appender-ref ref="console"/>
    </root>
</springProfile>
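Note that the snippet above is wrapped in <springProfile name="local">, so it only applies while the local profile is active; for the launched task pods you would either drop the guard or match their active profile. If the file is not packaged as the default logback-spring.xml, it can be pointed to via the standard Spring Boot logging.config property (a minimal sketch; the file name logback-masking.xml is hypothetical):
# application.yaml of the batch/task application (sketch; file name is hypothetical)
logging:
  config: classpath:logback-masking.xml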
As for the password recorded in the database, it seems we are out of luck. The best we can do is keep those SCDF tables in a separate schema, with specific permissions limited to accessing them.
The team fixed this issue in SCDF v2.6.2. The database credentials are no longer exposed in the logs, the pod description page, or the database. The credentials are still visible by default, so anyone who runs into this issue has to add the following environment variable to the deployment configuration and set its value to true:
SPRING_CLOUD_DATAFLOW_TASK_USE_KUBERNETES_SECRETS_FOR_DB_CREDENTIALS = true
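In the server-deployment.yaml shown above, that would be one more entry in the container's env list, roughly like this (a sketch following the same format as the existing entries):
# Deployment fragment (sketch): enable the SCDF 2.6.2+ option so DB credentials
# are passed to launched tasks via Kubernetes secrets instead of job parameters.
        - name: SPRING_CLOUD_DATAFLOW_TASK_USE_KUBERNETES_SECRETS_FOR_DB_CREDENTIALS
          value: 'true'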