How to fix "Connection refused error" when running a cluster mode spark job

I'm running the terasort benchmark with Spark on a university cluster that uses the SLURM job scheduler. It works fine when I use --master local[8], but when I set the master to my current node I get a connection refused error.

This command launches the application locally without problems:

> spark-submit \
    --class com.github.ehiggs.spark.terasort.TeraGen \
    --master local[8] \
    target/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar 1g \
    data/terasort_in

When I use cluster mode (iris-055 is the name of the cluster node in use), I get the following error:

> spark-submit \
    --class com.github.ehiggs.spark.terasort.TeraGen \
    --master spark://iris-055:7077 \
    --deploy-mode cluster \
    --executor-memory 20G \
    --total-executor-cores 24 \
    target/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar 5g \
    data/terasort_in

Output:

WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.spark.SparkException:  Exception thrown in awaitResult: 
    at
org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226) 
    at 
.
.
./*many lines of timeout logs etc.*/
.
.
.
Caused by: java.net.ConnectException: Connection refused
... 11 more

I expected the command to run smoothly and terminate, but I can't get past this connection error.

The problem was probably the undefined --conf variables. This solved it:

spark-submit \
    --class com.github.ehiggs.spark.terasort.TeraGen \
    --master spark://iris-055:7077 \
    --conf spark.driver.memory=4g \
    --conf spark.executor.memory=20g \
    --executor-memory 20g \
    --total-executor-cores 24 \
    target/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar 5g \
    data/terasort_in
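As a quick sanity check before resubmitting, it can also help to confirm that the Spark master is actually listening on the expected port, since "Connection refused" often just means nothing is bound there. Below is a minimal sketch of such a check in Python; the host `iris-055` and port `7077` come from the question, and you would adjust them to your cluster:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        # create_connection attempts a full TCP handshake within the timeout
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures
        return False

# Hostname and port taken from the question; adjust for your cluster.
print(port_open("iris-055", 7077))
```

If this prints False from the submit node, the issue is reachability (master not started, wrong hostname, or a firewall) rather than the spark-submit flags themselves.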