Why can't YARN acquire any executor when dynamic allocation is enabled?
When using YARN without dynamic allocation enabled, the job runs fine. I am using Spark 1.4.0.
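Dynamic allocation is turned on through Spark configuration; a minimal sketch of the kind of spark-defaults.conf entries involved (the executor bounds shown here are illustrative values, not my exact settings):

# spark-defaults.conf -- sketch of a typical dynamic allocation setup
spark.dynamicAllocation.enabled        true
spark.shuffle.service.enabled          true
# illustrative bounds; tune for your cluster
spark.dynamicAllocation.minExecutors   0
spark.dynamicAllocation.maxExecutors   10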
Here is what I am trying to do:
rdd = sc.parallelize(range(1000000))
rdd.first()
This is what I get in the logs:
15/09/08 11:36:12 INFO SparkContext: Starting job: runJob at PythonRDD.scala:366
15/09/08 11:36:12 INFO DAGScheduler: Got job 0 (runJob at PythonRDD.scala:366) with 1 output partitions (allowLocal=true)
15/09/08 11:36:12 INFO DAGScheduler: Final stage: ResultStage 0(runJob at PythonRDD.scala:366)
15/09/08 11:36:12 INFO DAGScheduler: Parents of final stage: List()
15/09/08 11:36:12 INFO DAGScheduler: Missing parents: List()
15/09/08 11:36:12 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at RDD at PythonRDD.scala:43), which has no missing parents
15/09/08 11:36:13 INFO MemoryStore: ensureFreeSpace(3560) called with curMem=0, maxMem=278302556
15/09/08 11:36:13 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.5 KB, free 265.4 MB)
15/09/08 11:36:13 INFO MemoryStore: ensureFreeSpace(2241) called with curMem=3560, maxMem=278302556
15/09/08 11:36:13 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.2 KB, free 265.4 MB)
15/09/08 11:36:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.1.5.212:50079 (size: 2.2 KB, free: 265.4 MB)
15/09/08 11:36:13 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:874
15/09/08 11:36:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (PythonRDD[1] at RDD at PythonRDD.scala:43)
15/09/08 11:36:13 INFO YarnScheduler: Adding task set 0.0 with 1 tasks
15/09/08 11:36:14 INFO ExecutorAllocationManager: Requesting 1 new executor because tasks are backlogged (new desired total will be 1)
15/09/08 11:36:28 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/09/08 11:36:43 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
...
Here is a screenshot of the cluster UI:
[screenshot of the cluster UI]
Can anyone give me a solution? Even a lead would be greatly appreciated.
I solved the problem: it turned out the issue was not directly related to resource availability. For dynamic allocation to work, YARN has to run Spark's external shuffle service rather than the MapReduce shuffle. To better understand dynamic allocation, I recommend reading this.
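Concretely, the fix amounts to registering Spark's shuffle service as a YARN auxiliary service on every NodeManager. A minimal sketch of the relevant yarn-site.xml entries (this assumes the spark-<version>-yarn-shuffle.jar shipped with Spark has already been placed on each NodeManager's classpath):

<!-- yarn-site.xml: run Spark's external shuffle service alongside MapReduce's -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

On the Spark side, both of these must be set:

spark.dynamicAllocation.enabled  true
spark.shuffle.service.enabled    true

After editing yarn-site.xml, restart the NodeManagers so the auxiliary service is actually loaded; until then, executors cannot register their shuffle output and the "Initial job has not accepted any resources" warning above keeps repeating.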