PySpark package installation on Kubernetes with spark-submit: ivy-cache file not found error
I have been fighting with this all day. I am able to install and use the package (graphframes) with the Spark shell or a connected Jupyter notebook, but I want to move it to a Kubernetes-based Spark environment using spark-submit.
My Spark version: 3.0.1
I downloaded the latest available .jar file (graphframes-0.8.1-spark3.0-s_2.12.jar) from spark-packages and put it into the jars folder. I use a variant of the standard Spark Dockerfile to build my image.
My spark-submit command looks like this:
$SPARK_HOME/bin/spark-submit \
--master k8s://https://kubernetes.docker.internal:6443 \
--deploy-mode cluster \
--conf spark.executor.instances= \
--conf spark.kubernetes.container.image=myimage.io/repositorypath \
--packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 \
--jars "local:///opt/spark/jars/graphframes-0.8.1-spark3.0-s_2.12.jar" \
path/to/my/script/script.py
But it ends with this error:
Ivy Default Cache set to: /opt/spark/.ivy2/cache
The jars for the packages stored in: /opt/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
graphframes#graphframes added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-e833e157-44f5-4055-81a4-3ab524176ef5;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-e833e157-44f5-4055-81a4-3ab524176ef5-1.0.xml (No such file or directory)
Here are my logs:
(base) konstantinigin@Konstantins-MBP spark-3.0.1-bin-hadoop3.2 % kubectl logs scalableapp-py-7669dd784bd59f67-driver
++ id -u
+ myuid=185
++ id -g
+ mygid=0
+ set +e
++ getent passwd 185
+ uidentry=
+ set -e
+ '[' -z '' ']'
+ '[' -w /etc/passwd ']'
+ echo '185:x:185:0:anonymous uid:/opt/spark:/bin/false'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ sort -t_ -k4 -n
+ grep SPARK_JAVA_OPT_
+ sed 's/[^=]*=\(.*\)//g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' 3 == 2 ']'
+ '[' 3 == 3 ']'
++ python3 -V
+ pyv3='Python 3.7.3'
+ export PYTHON_VERSION=3.7.3
+ PYTHON_VERSION=3.7.3
+ export PYSPARK_PYTHON=python3
+ PYSPARK_PYTHON=python3
+ export PYSPARK_DRIVER_PYTHON=python3
+ PYSPARK_DRIVER_PYTHON=python3
+ '[' -n '' ']'
+ '[' -z ']'
+ case "" in
+ shift 1
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.1.2.145 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner local:///opt/spark/data/ScalableApp.py --number_of_executors 2 --dataset USAir --links 100
Ivy Default Cache set to: /opt/spark/.ivy2/cache
The jars for the packages stored in: /opt/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
graphframes#graphframes added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-e833e157-44f5-4055-81a4-3ab524176ef5;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-e833e157-44f5-4055-81a4-3ab524176ef5-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
at org.apache.ivy.Ivy.resolve(Ivy.java:523)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1387)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:308)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
at org.apache.spark.deploy.SparkSubmit.doRunMain(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Does this look familiar to anyone? Maybe you know what I am doing wrong here?
OK, I solved my problem. I am not sure whether it works for other packages, but it let me run graphframes in the setup described above:
- Download the latest .jar file from spark-packages
- Remove the version part of its name, keeping only the package name. In my case that was:
mv ./graphframes-0.8.1-spark3.0-s_2.12.jar ./graphframes.jar
- Unpack it with the jar command:
# Extract jar contents
jar xf graphframes.jar
Here comes the first key point. I keep all the packages I use in a dependencies folder that is later submitted to Kubernetes in zipped form. The logic behind this folder is explained in another question of mine, which I again answered myself.
- Copy the graphframes folder from the contents extracted with the jar command in the previous step into the dependencies folder:
cp -r ./graphframes $SPARK_HOME/path/to/your/dependencies
- Add the original .jar file to the jars folder of $SPARK_HOME
- Add --jars to the spark-submit command, pointing to the new .jar file:
$SPARK_HOME/bin/spark-submit \
--master k8s://https://kubernetes.docker.internal:6443 \
--deploy-mode cluster \
--conf spark.executor.instances= \
--conf spark.kubernetes.container.image=docker.io/path/to/your/image \
--jars "local:///opt/spark/jars/graphframes.jar" \ ...
- Include your dependencies (a minimal sketch of these last steps follows below)
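For concreteness, here is a minimal sketch of how those last steps might look. The ./dependencies path, the dependencies.zip name, and baking the zip into the image under /opt/spark/data (next to the job script, referenced with the local:// scheme as above) are my assumptions, not something prescribed by graphframes or Spark:
# Assumed layout: extracted Python packages live in ./dependencies
cp ./graphframes.jar $SPARK_HOME/jars/                        # original .jar into $SPARK_HOME/jars
cd ./dependencies && zip -r ../dependencies.zip . && cd ..    # zip the dependencies folder

# Both the zip and the script are assumed to be baked into the image under /opt/spark/data
$SPARK_HOME/bin/spark-submit \
  --master k8s://https://kubernetes.docker.internal:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=docker.io/path/to/your/image \
  --jars "local:///opt/spark/jars/graphframes.jar" \
  --py-files "local:///opt/spark/data/dependencies.zip" \
  local:///opt/spark/data/script.py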
I am in a hurry right now, but in the near future I will edit this post and add a link to a short Medium article about handling dependencies in PySpark. Hope it is useful to someone :)
This seems to be a known Spark issue that is being worked on.
I managed to solve a similar problem where I could not download the hadoop-azure jar with the --packages flag. It is definitely a workaround, but it works.
I modified the PySpark Docker container, changing the entrypoint to:
ENTRYPOINT [ "/opt/entrypoint.sh" ]
Now I can run the container without it exiting immediately:
docker run -td <docker_image_id>
and get a shell inside it:
docker exec -it <docker_container_id> /bin/bash
At this point I can submit the Spark job inside the container with the --packages flag:
$SPARK_HOME/bin/spark-submit \
--master local[*] \
--deploy-mode client \
--name spark-python \
--packages org.apache.hadoop:hadoop-azure:3.2.0 \
--conf spark.hadoop.fs.azure.account.auth.type.user.dfs.core.windows.net=SharedKey \
--conf spark.hadoop.fs.azure.account.key.user.dfs.core.windows.net=xxx \
--files "abfss://data@user.dfs.core.windows.net/config.yml" \
--py-files "abfss://data@user.dfs.core.windows.net/jobs.zip" \
"abfss://data@user.dfs.core.windows.net/main.py"
Spark then downloaded the required dependencies, saved them under /root/.ivy2 in the container, and executed the job successfully.
I copied the whole folder from the container to the host:
sudo docker cp <docker_container_id>:/root/.ivy2/ /opt/spark/.ivy2/
and modified the Dockerfile again to copy the folder into the image:
COPY .ivy2 /root/.ivy2
Finally, I can submit jobs to Kubernetes with this newly built image and they run as expected.
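Putting the pieces together, a minimal sketch of the final Dockerfile might look like the following. The base image name is an assumption standing in for whatever your existing Spark Dockerfile produces; the only additions are the pre-populated Ivy cache and the restored entrypoint:
# Assumption: spark-py:3.0.1 stands in for the image built by your standard Spark Dockerfile
FROM spark-py:3.0.1

# Ivy cache copied out of the throwaway container, so --packages resolves from the local cache
COPY .ivy2 /root/.ivy2

# Standard Spark-on-Kubernetes entrypoint, restored after the debugging session
ENTRYPOINT [ "/opt/entrypoint.sh" ]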
Adding this configuration to spark-submit worked for me:
spark-submit \
--conf spark.driver.extraJavaOptions="-Divy.cache.dir=/tmp -Divy.home=/tmp" \
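For context, a sketch of how this might sit in a full command is below. Pointing the executors at the same cache directory or setting spark.jars.ivy are variations on the same idea that I am adding here as untested suggestions, not something the answer above verified:
# -Divy.cache.dir/-Divy.home move the Ivy cache to a writable location (/tmp);
# spark.jars.ivy is the Spark-level setting for the same directory (my addition).
$SPARK_HOME/bin/spark-submit \
  --master k8s://https://kubernetes.docker.internal:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=myimage.io/repositorypath \
  --conf spark.driver.extraJavaOptions="-Divy.cache.dir=/tmp -Divy.home=/tmp" \
  --conf spark.jars.ivy=/tmp/.ivy \
  --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 \
  path/to/my/script/script.py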