Spark ERROR executor: Exception in task 0.0 in stage 0.0 (tid 0) java.lang.ArithmeticException

I get the error below when running a Java web application with Cassandra 3.11.9 and Spark 3.0.1.

My question is: why does it only happen after the application is deployed? It does not happen in the development environment.

2021-03-24 08:50:41.150 INFO 19613 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : ShuffleMapStage 0 (collectAsList at FalhaService.java:60) failed in 7.513 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (GDBHML08 executor driver): java.lang.ArithmeticException: integer overflow
    at java.lang.Math.toIntExact(Math.java:1011)
    at org.apache.spark.sql.catalyst.util.DateTimeUtils$.fromJavaDate(DateTimeUtils.scala:90)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$DateConverter$.toCatalystImpl(CatalystTypeConverters.scala:306)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$DateConverter$.toCatalystImpl(CatalystTypeConverters.scala:305)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:107)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:252)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:242)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:107)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$.$anonfun$createToCatalystConverter(CatalystTypeConverters.scala:426)
    at com.datastax.spark.connector.datasource.UnsafeRowReader.read(UnsafeRowReaderFactory.scala:34)
    at com.datastax.spark.connector.datasource.UnsafeRowReader.read(UnsafeRowReaderFactory.scala:21)
    at com.datastax.spark.connector.datasource.CassandraPartitionReaderBase.$anonfun$getIterator(CassandraScanPartitionReaderFactory.scala:110)
    at scala.collection.Iterator$$anon.next(Iterator.scala:461)
    at scala.collection.Iterator$$anon.next(Iterator.scala:496)
    at com.datastax.spark.connector.datasource.CassandraPartitionReaderBase.next(CassandraScanPartitionReaderFactory.scala:66)
    at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:79)
    at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:112)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon.hasNext(Iterator.scala:460)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon.hasNext(WholeStageCodegenExec.scala:755)
    at scala.collection.Iterator$$anon.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace: 2021-03-24 08:50:41.189 INFO 19613 --- [nio-8080-exec-2] org.apache.spark.scheduler.DAGScheduler : Job 0 failed: collectAsList at FalhaService.java:60, took 8.160348 s

The line of code where this error occurs:

List<Row> rows = dataset.collectAsList();

The code block:

Dataset<Row> dataset = session.sql(sql.toString());

List<Row> rows = dataset.collectAsList();
ListIterator<Row> t = rows.listIterator();
while (t.hasNext()) {
    Row row = t.next();
    EstGraficoRelEstTela grafico = new EstGraficoRelEstTela();
    grafico.setSuperficie(row.getLong(0));
    grafico.setSubsea(row.getLong(1) + row.getLong(2));
    grafico.setNomeTipoSensor(row.getString(3));
    graficoLocalFalhas.add(grafico);
}
session.close();

Thanks,

It looks like you have incorrect data in your database; some date field is far in the future. If you look at the source code, you'll see that the date is first converted to milliseconds and then to days, and that conversion overflows the integer. This would explain why the code works in the development environment...
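The overflow is reproducible in plain Java. Below is a minimal sketch of the millis-to-days conversion described above; the 86,400,000 ms-per-day divisor and the garbage timestamp are illustrative stand-ins, not Spark's exact code path:

```java
public class DateOverflowDemo {
    private static final long MILLIS_PER_DAY = 86_400_000L;

    public static void main(String[] args) {
        // A sane date: the day count since the epoch easily fits in an int.
        long normalMillis = java.sql.Date.valueOf("2021-03-24").getTime();
        int normalDays = Math.toIntExact(Math.floorDiv(normalMillis, MILLIS_PER_DAY));
        System.out.println("days for a normal date: " + normalDays);

        // A corrupted far-future timestamp (hypothetical garbage value):
        // the day count exceeds Integer.MAX_VALUE, so Math.toIntExact
        // throws java.lang.ArithmeticException: integer overflow,
        // exactly as in the stack trace above.
        long corruptMillis = Long.MAX_VALUE / 2;
        long corruptDays = Math.floorDiv(corruptMillis, MILLIS_PER_DAY);
        try {
            Math.toIntExact(corruptDays);
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

So a single row with a corrupted date column is enough to fail the whole stage the moment the reader tries to convert it.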

You can ask your administrator to check the data files for corrupted data, for example with the nodetool scrub command.

P.S. Are you sure you're using Spark 3.0.1? The positions of the functions in the error match Spark 3.1.1...