Spark submit to kubernetes: packages not pulled by executors
I am trying to submit my PySpark application to a Kubernetes cluster (Minikube) with spark-submit:
./bin/spark-submit \
--master k8s://https://192.168.64.4:8443 \
--deploy-mode cluster \
--packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 \
--conf spark.kubernetes.container.image='pyspark:dev' \
--conf spark.kubernetes.container.image.pullPolicy='Never' \
local:///main.py
The application needs to reach a Kafka instance deployed inside the cluster, so I specified the jar dependency:
--packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1
The container image I am using is based on the one I built with the utility script, and I have packaged all the Python dependencies my application needs.
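For context, the base image came out of the utility script roughly as below; the tag is a placeholder and my actual 'pyspark:dev' image just layers the Python dependencies on top. Since pullPolicy is 'Never', the image has to be present in Minikube's Docker daemon:
# Point the Docker CLI at Minikube's daemon, then build the PySpark image
# from the root of the Spark distribution (tag is illustrative).
eval $(minikube docker-env)
./bin/docker-image-tool.sh \
  -t dev \
  -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile \
  build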
The driver deploys correctly, fetches the Kafka package (I can provide the logs if needed) and starts the executors in new pods.
But then the executor pods crash:
ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.lang.ClassNotFoundException: org.apache.spark.sql.kafka010.KafkaBatchInputPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon.resolveClass(JavaSerializer.scala:68)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1986)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1850)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2160)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:407)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
So I inspected the executor pod and found that the Kafka package jar is missing from the $SPARK_CLASSPATH folder (set to ':/opt/spark/jars/*').
Do I also need to fetch the dependencies and include them in the Spark jars folder when building the Docker image? (I thought the '--packages' option would make the executors retrieve the specified jars as well.)
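In case baking them in at build time is indeed the expected approach, this is roughly what I have in mind: download the connector and its runtime dependencies straight into the image's jars directory. The transitive artifacts and versions below are my guesses from the connector's POM and would need to be verified:
# Hypothetical pre-build step or Dockerfile RUN layer; versions are assumptions.
MAVEN=https://repo1.maven.org/maven2
cd /opt/spark/jars
curl -fSLO $MAVEN/org/apache/spark/spark-sql-kafka-0-10_2.12/3.0.1/spark-sql-kafka-0-10_2.12-3.0.1.jar
curl -fSLO $MAVEN/org/apache/spark/spark-token-provider-kafka-0-10_2.12/3.0.1/spark-token-provider-kafka-0-10_2.12-3.0.1.jar
curl -fSLO $MAVEN/org/apache/kafka/kafka-clients/2.4.1/kafka-clients-2.4.1.jar
curl -fSLO $MAVEN/org/apache/commons/commons-pool2/2.6.2/commons-pool2-2.6.2.jar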
Did you start from the official Dockerfile (kubernetes/dockerfiles/spark/bindings/python/Dockerfile) as described in the Docker images section of the documentation?
You also need to specify an upload location on a Hadoop-compatible file system and make sure that the specified Ivy home and cache directories have the correct permissions, as described in the Dependency Management section.
Example from the documentation:
...
--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.6
--conf spark.kubernetes.file.upload.path=s3a://<s3-bucket>/path
--conf spark.hadoop.fs.s3a.access.key=...
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
--conf spark.hadoop.fs.s3a.fast.upload=true
--conf spark.hadoop.fs.s3a.secret.key=....
--conf spark.driver.extraJavaOptions="-Divy.cache.dir=/tmp -Divy.home=/tmp"
...
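Adapted to your command, an untested sketch could look like this; the upload path is a placeholder for whatever Hadoop-compatible storage is reachable from both the driver and the executors:
./bin/spark-submit \
  --master k8s://https://192.168.64.4:8443 \
  --deploy-mode cluster \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 \
  --conf spark.kubernetes.container.image='pyspark:dev' \
  --conf spark.kubernetes.container.image.pullPolicy='Never' \
  --conf spark.kubernetes.file.upload.path=<hadoop-compatible-path> \
  --conf spark.driver.extraJavaOptions="-Divy.cache.dir=/tmp -Divy.home=/tmp" \
  local:///main.py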