JcaPEMKeyConverter is provided by BouncyCastle, an optional dependency. To use support for EC Keys you must explicitly add dependency to classpath

I have a simple Flink streaming application. It runs fine in the cluster created by the start-cluster.sh command.

Now, following the Flink tutorial on macOS, I would like to deploy it in application mode natively to a Kubernetes cluster created by k3d.

First, I created a cluster with k3d cluster create dev.
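As a quick sanity check that the cluster and kubeconfig are in place (a sketch; k3d names the kubectl context k3d-<cluster-name>, so k3d-dev here):

k3d cluster list                  # the new cluster should be listed as "dev"
kubectl config current-context    # should print k3d-dev
kubectl get nodes                 # confirms the kubeconfig can reach the cluster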

Here is my Dockerfile:

FROM flink
RUN mkdir -p $FLINK_HOME/usrlib
COPY target/streaming-0.1.jar $FLINK_HOME/usrlib/streaming-0.1.jar

I built it and pushed it to Docker Hub.
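For completeness, the build and push were roughly (a sketch; hongbomiao/my-flink-xxx is the image name used in the run command below):

docker build -t hongbomiao/my-flink-xxx:latest .
docker push hongbomiao/my-flink-xxx:latest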

My cluster name is k3d-dev, so I ran

flink run-application \
    --target kubernetes-application \
    -Dkubernetes.cluster-id=k3d-dev \
    -Dkubernetes.container.image=hongbomiao/my-flink-xxx:latest \
    local:///opt/flink/usrlib/streaming-0.1.jar

However, I got this error:

 The program finished with the following exception:

io.fabric8.kubernetes.client.KubernetesClientException: JcaPEMKeyConverter is provided by BouncyCastle, an optional dependency. To use support for EC Keys you must explicitly add this dependency to classpath.
    at io.fabric8.kubernetes.client.internal.CertUtils.handleECKey(CertUtils.java:161)
    at io.fabric8.kubernetes.client.internal.CertUtils.loadKey(CertUtils.java:131)
    at io.fabric8.kubernetes.client.internal.CertUtils.createKeyStore(CertUtils.java:111)
    at io.fabric8.kubernetes.client.internal.CertUtils.createKeyStore(CertUtils.java:243)
    at io.fabric8.kubernetes.client.internal.SSLUtils.keyManagers(SSLUtils.java:128)
    at io.fabric8.kubernetes.client.internal.SSLUtils.keyManagers(SSLUtils.java:122)
    at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:82)
    at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:62)
    at io.fabric8.kubernetes.client.BaseClient.<init>(BaseClient.java:51)
    at io.fabric8.kubernetes.client.DefaultKubernetesClient.<init>(DefaultKubernetesClient.java:105)
    at org.apache.flink.kubernetes.kubeclient.FlinkKubeClientFactory.fromConfiguration(FlinkKubeClientFactory.java:102)
    at org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:61)
    at org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:39)
    at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:63)
    at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1057)
    at org.apache.flink.client.cli.CliFrontend.lambda$main(CliFrontend.java:1132)
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)

After some reading, I added

<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.69</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk15on</artifactId>
    <version>1.69</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-ext-jdk15on</artifactId>
    <version>1.69</version>
</dependency>

to my pom.xml file. I built and pushed to Docker Hub again.

When I ran the Flink command above, I still got the same error. Any ideas? Thanks!


Update 1:

Besides the pom.xml change above, I manually downloaded those 3 jars and changed my Dockerfile to

FROM flink
COPY lib/* $FLINK_HOME/lib
RUN mkdir -p $FLINK_HOME/usrlib
COPY target/streaming-0.1.jar $FLINK_HOME/usrlib/streaming-0.1.jar
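The jars themselves went into a local lib/ folder before the build, fetched from Maven Central roughly like this (a sketch; the URLs follow Maven Central's standard layout for these coordinates):

mkdir -p lib
curl -L -o lib/bcprov-jdk15on-1.69.jar https://repo1.maven.org/maven2/org/bouncycastle/bcprov-jdk15on/1.69/bcprov-jdk15on-1.69.jar
curl -L -o lib/bcpkix-jdk15on-1.69.jar https://repo1.maven.org/maven2/org/bouncycastle/bcpkix-jdk15on/1.69/bcpkix-jdk15on-1.69.jar
curl -L -o lib/bcprov-ext-jdk15on-1.69.jar https://repo1.maven.org/maven2/org/bouncycastle/bcprov-ext-jdk15on/1.69/bcprov-ext-jdk15on-1.69.jar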

Tried again, still the same error.

I can confirm that the 3 jar files bcpkix-jdk15on-1.69.jar, bcprov-ext-jdk15on-1.69.jar, and bcprov-jdk15on-1.69.jar are in the Docker image:

➜ docker run -it 6c48af48db55c334003a307d1ef7a5fc5181f389613284b66b5cb97588b9708d sh

$ cd lib && ls
bcpkix-jdk15on-1.69.jar      flink-dist_2.12-1.13.2.jar     flink-table_2.12-1.13.2.jar  log4j-slf4j-impl-2.12.1.jar
bcprov-ext-jdk15on-1.69.jar  flink-json-1.13.2.jar      log4j-1.2-api-2.12.1.jar
bcprov-jdk15on-1.69.jar      flink-shaded-zookeeper-3.4.14.jar  log4j-api-2.12.1.jar
flink-csv-1.13.2.jar         flink-table-blink_2.12-1.13.2.jar  log4j-core-2.12.1.jar
$ cd ../usrlib && ls
streaming-0.1.jar

Update 2:

I tried to start session mode via

/usr/local/Cellar/apache-flink/1.13.1/libexec/bin/kubernetes-session.sh

but I still got the same error. So now I can confirm that the issue I had earlier with application mode is not related to my Docker image.

~/.m2 on my machine already has those jars.

Am I missing any other jars?

Also, I found the error only happens with clusters created by k3d/k3s, not with minikube.
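This would make sense if k3s issues EC (ECDSA) client certificates in its kubeconfig while minikube uses RSA keys, which the fabric8 Kubernetes client can load without BouncyCastle. A quick way to check which kind of key the kubeconfig carries (a sketch; the user index depends on your kubeconfig):

kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 --decode | head -1
# a k3d/k3s kubeconfig typically prints: -----BEGIN EC PRIVATE KEY-----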

After checking the code of

  • /usr/local/Cellar/apache-flink/1.13.1/libexec/bin/kubernetes-session.sh
  • /usr/local/Cellar/apache-flink/1.13.1/libexec/libexec/kubernetes-session.sh

I found that the first script points to the second one, and the second one has

# ...
CC_CLASSPATH=`manglePathList $(constructFlinkClassPath):$INTERNAL_HADOOP_CLASSPATHS`

# ...
"$JAVA_RUN" $JVM_ARGS -classpath "$CC_CLASSPATH" $log_setting org.apache.flink.kubernetes.cli.KubernetesSessionCli "$@"

I added echo $CC_CLASSPATH, which printed out the classpath.
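The tweak is just one extra line after CC_CLASSPATH is assembled (a temporary debugging edit, not part of the stock script):

# in libexec/kubernetes-session.sh, right after CC_CLASSPATH is set:
echo $CC_CLASSPATH    # print the classpath the CLI actually uses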

In my case, it is under /usr/local/Cellar/apache-flink/1.13.1/libexec/lib.

After I put bcprov-jdk15on-1.69.jar and bcpkix-jdk15on-1.69.jar into the folder above, Flink can now deploy to k3s (k3d) in both session and application mode.

Summary

Download the bcprov-jdk15on and bcpkix-jdk15on jar files,

then move them to the folder

/usr/local/Cellar/apache-flink/{version}/libexec/lib
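Concretely, something like this (a sketch; in my case {version} is 1.13.1, and the jars are assumed to already be in a local lib/ folder as in Update 1):

cp lib/bcprov-jdk15on-1.69.jar lib/bcpkix-jdk15on-1.69.jar \
   /usr/local/Cellar/apache-flink/1.13.1/libexec/lib/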

Then you are good to go.