Submit a Spark application via AWS EMR
Hi, I'm new to cloud computing, so I apologize for this (possibly) silly question. I need help figuring out whether what I'm doing is actually computing on the cluster or only on the master node (which would be useless).
What I can do:
I can set up a cluster with a certain number of nodes through the AWS console, with Spark installed on all of them, and I can connect to the master node via SSH. What does it take, then, to run the Spark code in my jar on the cluster?
What I would do:
I would call spark-submit to run my code:
spark-submit --class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments]
My doubts:
Do I need to pass --master with a "spark://" reference to the master? Where can I find that reference? Should I run the script sbin/start-master.sh to start a standalone cluster manager, or is it already set up? If I run the code above, I suppose it would run only locally on the master, right?
Can I keep my input files only on the master node? Say I want to count the words of a huge text file: can I just keep it on the master's disk, or do I need distributed storage like HDFS to preserve parallelism? I can't figure this out; I would keep it on the master node's disk if that's appropriate.
Thank you very much for your replies.
Update 1:
I tried to run the Pi example on the cluster, but I can't get the result.
$ sudo spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster /usr/lib/spark/examples/jars/spark-examples.jar 10
I expected to get a line printing Pi is roughly 3.14...
Instead I got:
17/04/15 13:16:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/15 13:16:03 INFO RMProxy: Connecting to ResourceManager at ip-172-31-37-222.us-west-2.compute.internal/172.31.37.222:8032
17/04/15 13:16:03 INFO Client: Requesting a new application from cluster with 2 NodeManagers
17/04/15 13:16:03 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
17/04/15 13:16:03 INFO Client: Will allocate AM container, with 5120 MB memory including 465 MB overhead
17/04/15 13:16:03 INFO Client: Setting up container launch context for our AM
17/04/15 13:16:03 INFO Client: Setting up the launch environment for our AM container
17/04/15 13:16:03 INFO Client: Preparing resources for our AM container
17/04/15 13:16:06 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/04/15 13:16:10 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_libs__5838015067814081789.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_libs__5838015067814081789.zip
17/04/15 13:16:12 INFO Client: Uploading resource file:/usr/lib/spark/examples/jars/spark-examples.jar -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/spark-examples.jar
17/04/15 13:16:12 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_conf__1370316719712336297.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_conf__.zip
17/04/15 13:16:13 INFO SecurityManager: Changing view acls to: root
17/04/15 13:16:13 INFO SecurityManager: Changing modify acls to: root
17/04/15 13:16:13 INFO SecurityManager: Changing view acls groups to:
17/04/15 13:16:13 INFO SecurityManager: Changing modify acls groups to:
17/04/15 13:16:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/04/15 13:16:13 INFO Client: Submitting application application_1492261407069_0007 to ResourceManager
17/04/15 13:16:13 INFO YarnClientImpl: Submitted application application_1492261407069_0007
17/04/15 13:16:14 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:14 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1492262173096
final status: UNDEFINED
tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
user: root
17/04/15 13:16:15 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:24 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:25 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:25 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.33.215
ApplicationMaster RPC port: 0
queue: default
start time: 1492262173096
final status: UNDEFINED
tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
user: root
17/04/15 13:16:26 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:55 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:56 INFO Client: Application report for application_1492261407069_0007 (state: FINISHED)
17/04/15 13:16:56 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.33.215
ApplicationMaster RPC port: 0
queue: default
start time: 1492262173096
final status: SUCCEEDED
tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
user: root
17/04/15 13:16:56 INFO ShutdownHookManager: Shutdown hook called
17/04/15 13:16:56 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9
Answer to the first doubt:
I assume you want to run Spark on YARN. You can just pass --master yarn --deploy-mode cluster, and the Spark driver runs inside an application master process managed by YARN on the cluster:
spark-submit --master yarn --deploy-mode cluster \
--class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments]
Reference: see the Spark documentation for the other modes.
When you run a job with --deploy-mode cluster, you do not see the output on the machine you submitted from (if you are printing something). Reason: the job runs in cluster mode, so the driver runs on one of the nodes in the cluster, and the output is emitted on that machine.
To check the output, you can find it in the application logs with the following command:
yarn logs -applicationId application_id
Answer to the second doubt:
You can keep the input files anywhere (master node or HDFS).
Parallelism depends entirely on the number of partitions of the RDD/DataFrame created when the data is loaded.
The number of partitions depends on the data size, but you can control it by passing an argument when loading the data.
If you are loading the data from the master node:
val rdd = sc.textFile("/home/ubuntu/input.txt", [number of partitions])
rdd will be created with the number of partitions you pass. If you do not pass a number of partitions, it will use spark.default.parallelism as configured in the Spark conf.
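Conceptually, the partition count just controls how the input records are split into chunks that can be processed in parallel. Here is a minimal Python sketch of that idea (not Spark's actual implementation; Spark splits text input by byte ranges, not by counting lines):

```python
def split_into_partitions(records, num_partitions):
    """Divide records into num_partitions contiguous, near-equal chunks,
    roughly how a partition count controls how much work can run in parallel."""
    base, extra = divmod(len(records), num_partitions)
    partitions, start = [], 0
    for i in range(num_partitions):
        size = base + (1 if i < extra else 0)  # spread the remainder over the first chunks
        partitions.append(records[start:start + size])
        start += size
    return partitions

lines = [f"line {i}" for i in range(10)]
print(split_into_partitions(lines, 3))  # chunks of sizes 4, 3, 3
```

Each chunk can then be handled by a separate executor task, which is why more partitions (up to the number of available cores) means more parallelism.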
If you are loading the data from HDFS:
val rdd = sc.textFile("hdfs://namenode:8020/data/input.txt")
rdd will be created with a number of partitions equal to the number of HDFS blocks the file occupies.
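Since HDFS stores files in fixed-size blocks (128 MB is a common default; your cluster's block size may differ), the partition count for a large file is roughly the file size divided by the block size, rounded up. A small sketch of that calculation, assuming a 128 MB block size:

```python
import math

def hdfs_block_count(file_size_bytes, block_size_bytes=128 * 1024 * 1024):
    """Number of HDFS blocks a file of the given size occupies (ceiling division)."""
    return math.ceil(file_size_bytes / block_size_bytes)

# A 1 GB file with 128 MB blocks spans 8 blocks, so sc.textFile
# would give roughly 8 partitions.
print(hdfs_block_count(1024 ** 3))  # 8
```

Note this is only an estimate of the default; Spark also enforces a minimum partition count, and you can always repartition after loading.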
Hope my answer helps you.
You can use this:
spark-submit --deploy-mode client --executor-memory 4g --class org.apache.spark.examples.SparkPi /usr/lib/spark/examples/jars/spark-examples.jar