NPE in spark with Joda DateTime
I get a NullPointerException when performing a simple map over a Joda DateTime field in Spark.
Code snippet:
val me1 = (accountId, DateTime.now())
val me2 = (accountId, DateTime.now())
val me3 = (accountId, DateTime.now())
val rdd = spark.parallelize(List(me1, me2, me3))
val result = rdd.map{case (a,d) => (a,d.dayOfMonth().roundFloorCopy())}.collect.toList
Stack trace:
java.lang.NullPointerException
at org.joda.time.DateTime$Property.roundFloorCopy(DateTime.java:2280)
at x.y.z.jobs.info.AggJobTest$$anonfun$$anonfun.apply(AggJobTest.scala:47)
at x.y.z.jobs.info.AggJobTest$$anonfun$$anonfun.apply(AggJobTest.scala:47)
at scala.collection.Iterator$$anon.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun.apply(RDD.scala:780)
at org.apache.spark.rdd.RDD$$anonfun.apply(RDD.scala:780)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1314)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1314)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any suggestions on how to fix this?
Update:
To reproduce the issue, you need to use the KryoSerializer:
.set("spark.serializer",
"org.apache.spark.serializer.KryoSerializer")
As you noted, you are using the KryoSerializer with Joda DateTime objects. It seems the serialization is dropping some required state, so you may want to look at using one of the projects that adds Kryo support for Joda DateTime objects. For example, https://github.com/magro/kryo-serializers provides a serializer called JodaDateTimeSerializer, which you can register with kryo.register(DateTime.class, new JodaDateTimeSerializer());
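In Spark, the idiomatic place to do that registration is a custom KryoRegistrator wired in through the SparkConf. A minimal sketch, assuming the kryo-serializers library from the link above is on the classpath (the class name JodaKryoRegistrator is made up for illustration):

```scala
import com.esotericsoftware.kryo.Kryo
import de.javakaffee.kryoserializers.jodatime.JodaDateTimeSerializer
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator
import org.joda.time.DateTime

// Hypothetical registrator: tells Kryo to use the dedicated Joda
// serializer instead of its default field-based serialization,
// which appears to lose state the DateTime needs.
class JodaKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[DateTime], new JodaDateTimeSerializer())
  }
}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[JodaKryoRegistrator].getName)
```

With this in place, executors should deserialize DateTime values with a usable chronology, so calls like dayOfMonth().roundFloorCopy() no longer hit the NPE.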