Job 65 cancelled because SparkContext was shut down
I'm using a shared Apache Zeppelin server. Almost every day, I try to run a command and get this error: Job 65 cancelled because SparkContext was shut down
I would love to understand more about what causes the SparkContext to shut down. My understanding is that Zeppelin is a kube app that sends commands to a machine for processing.
When a SparkContext shuts down, does that mean my bridge to the Spark cluster is down? And, if that's the case, how can I cause the bridge to the Spark cluster to go down?
In this example, it happened while I was trying to upload data to S3.
Here is the code:
val myfiles = readParquet(
  startDate = new LocalDate(2020, 4, 1),
  endDate = new LocalDate(2020, 4, 7)
)

myfiles.createOrReplaceTempView("myfiles")

val mySQLDF = spark.sql(s"""
  select [6 columns]
  from myfiles
  join [other table]
  on [join_condition]
""")

mySQLDF.write.option("maxRecordsPerFile", 1000000).parquet(path)
// mySQLDF has 3M rows and they're all strings or dates
Here is the stack trace of the error:
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery.apply(SparkPlan.scala:156)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:566)
... 48 elided
Caused by: org.apache.spark.SparkException: Job 44 cancelled because SparkContext was shut down
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop.apply(DAGScheduler.scala:972)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop.apply(DAGScheduler.scala:970)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:970)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2286)
at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2193)
at org.apache.spark.SparkContext$$anonfun$stop.apply$mcV$sp(SparkContext.scala:1949)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1948)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:777)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
... 70 more
Your job was aborted during the write step. Job aborted. is the top-level exception; its underlying cause is the SparkContext being shut down.
Consider optimizing the write step. maxRecordsPerFile may be the culprit; try a lower number. You currently allow up to 1 million records per file!
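A minimal sketch of what a gentler write might look like, assuming a hypothetical output path; the repartition count and per-file limit below are illustrative values to tune for your cluster, not recommendations from the original job:

val outputPath = "s3://my-bucket/output/"   // hypothetical path, not from the original job

mySQLDF
  .repartition(24)                          // spread the write over more, smaller tasks
  .write
  .option("maxRecordsPerFile", 250000)      // ~250k rows per file instead of 1M
  .mode("overwrite")
  .parquet(outputPath)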
In general, Job ${job.jobId} cancelled because SparkContext was shut down just means that an exception occurred, the DAG could not continue, and the job had to error out. It is the Spark scheduler throwing this error when it faces an exception, which may be an unhandled exception in your code or a job failure for any other reason. When the DAG scheduler is stopped, the whole application stops (this message is part of the cleanup).
To answer your questions:
When a SparkContext shuts down, does that mean my bridge to the Spark cluster is down?
A SparkContext represents the connection to the Spark cluster, so if it is dead, it means you cannot run jobs anymore because you have lost the link! In Zeppelin you can simply restart the SparkContext (Menu -> Interpreter -> Spark interpreter -> restart).
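If you want to check programmatically whether that link is still alive before kicking off a long paragraph on a shared notebook, a minimal sketch (assuming SparkContext.isStopped is available in your Spark version; mySQLDF and path are from the code above):

if (spark.sparkContext.isStopped) {
  println("SparkContext is down - restart the Spark interpreter before re-running this paragraph")
} else {
  mySQLDF.write.option("maxRecordsPerFile", 1000000).parquet(path)   // the original write
}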
And, if that's the case, how can I cause the bridge to the spark cluster to go down?
Through a SparkException/Error inside a job, or manually with sc.stop().
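A minimal sketch, assuming the spark SparkSession that the Zeppelin Spark interpreter provides:

val sc = spark.sparkContext

sc.stop()   // takes the "bridge" down; jobs still in flight are cancelled with
            // "Job N cancelled because SparkContext was shut down"

// Any action attempted afterwards fails as well, because the context is gone.
spark.range(10).count()

On a shared Zeppelin server where notes share one interpreter, any user stopping that context (or an interpreter restart or idle timeout) takes it down for everyone attached to it, which may be why you see this error so regularly.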