Unable to find class: org.apache.spark.h2o.package$StringHolder
I'm trying the simple droplet program from https://github.com/h2oai/sparkling-water,
but I can't get it to run successfully with spark-submit.
I'm using Sparkling Water 1.6.4, the same version as the example code.
spark-submit --jars sparkling-water-assembly-1.6.4-all.jar swtest_2.10-1.0.jar
I didn't use the Gradle build that ships with the example code; I just used a very simple sbt build:
name := "SWTest"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "ai.h2o" % "sparkling-water-core_2.10" % "1.6.4"
libraryDependencies += "ai.h2o" % "sparkling-water-examples_2.10" % "1.6.4"
The program runs fine until it reaches:
// Read the actual labels and the predictions back from H2O as RDDs of StringHolder
val trainRDD = h2oContext.asRDD[StringHolder](irisData('class))
val predictRDD = h2oContext.asRDD[StringHolder](predict)
// Pair each actual label with its prediction and keep only the mismatches
val numMispredictions = trainRDD.zip(predictRDD).filter( i => {
  val act = i._1
  val pred = i._2
  act.result != pred.result
}).collect()
It looks like asRDD needs a type parameter, which here is StringHolder.
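For context, StringHolder is defined in the org.apache.spark.h2o package object (which is why the error below shows package$StringHolder), so a standalone build like this one would normally bring it in with a wildcard import. The line below is my assumption about the program's imports, not part of the original listing:

import org.apache.spark.h2o._ // brings StringHolder (and the other holder types) into scope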
But it fails with "Unable to find class: org.apache.spark.h2o.package$StringHolder":
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: Number of Trees Model Size in Bytes Min. Depth Max. Depth Mean Depth Min. Leaves Max. Leaves Mean Leaves
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: 15 2176 1 5 4.20000 2 9 7.20000
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: Scoring History:
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: Timestamp Duration Number of Trees Training MSE Training LogLoss Training Classification Error
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: 2016-12-06 15:03:50 0.261 sec 0 0.44444 1.09861 0.64000
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: 2016-12-06 15:03:51 1.607 sec 1 0.36474 0.92664 0.04000
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: 2016-12-06 15:03:52 1.987 sec 2 0.29854 0.79143 0.04667
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: 2016-12-06 15:03:52 2.364 sec 3 0.24482 0.68353 0.04667
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: 2016-12-06 15:03:53 2.668 sec 4 0.20083 0.59453 0.04667
12-06 15:03:53.442 127.0.0.1:54321 489 FJ-1-3 INFO: 2016-12-06 15:03:53 3.007 sec 5 0.16523 0.52069 0.04667
gbm prediction
12-06 15:03:53.846 127.0.0.1:54321 489 main INFO: Confusion Matrix (vertical: actual; across: predicted):
12-06 15:03:53.846 127.0.0.1:54321 489 main INFO: Iris-setosa Iris-versicolor Iris-virginica Error Rate
12-06 15:03:53.846 127.0.0.1:54321 489 main INFO: Iris-setosa 50 0 0 0.0000 = 0 / 50
12-06 15:03:53.846 127.0.0.1:54321 489 main INFO: Iris-versicolor 0 48 2 0.0400 = 2 / 50
12-06 15:03:53.846 127.0.0.1:54321 489 main INFO: Iris-virginica 0 5 45 0.1000 = 5 / 50
12-06 15:03:53.846 127.0.0.1:54321 489 main INFO: Totals 50 53 47 0.0467 = 7 / 150
12-06 15:03:53.847 127.0.0.1:54321 489 main INFO: Top-3 Hit Ratios:
12-06 15:03:53.847 127.0.0.1:54321 489 main INFO: K Hit Ratio
12-06 15:03:53.847 127.0.0.1:54321 489 main INFO: 1 0.953333
12-06 15:03:53.847 127.0.0.1:54321 489 main INFO: 2 1.000000
12-06 15:03:53.847 127.0.0.1:54321 489 main INFO: 3 1.000000
computer number of mispredictions
computer number of mispredictions
16/12/06 15:03:55 ERROR TaskResultGetter: Exception while getting task result
com.esotericsoftware.kryo.KryoException: Unable to find class: org.apache.spark.h2o.package$StringHolder
at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:610)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:721)
at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:41)
at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:33)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:338)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:293)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
at org.apache.spark.serializer.KryoSerializerInstance.deserialize(KryoSerializer.scala:311)
at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:97)
at org.apache.spark.scheduler.TaskResultGetter$$anon$$anonfun$run.apply$mcV$sp(TaskResultGetter.scala:60)
at org.apache.spark.scheduler.TaskResultGetter$$anon$$anonfun$run.apply(TaskResultGetter.scala:51)
at org.apache.spark.scheduler.TaskResultGetter$$anon$$anonfun$run.apply(TaskResultGetter.scala:51)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
at org.apache.spark.scheduler.TaskResultGetter$$anon.run(TaskResultGetter.scala:50)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.h2o.package$StringHolder
at java.lang.ClassLoader.findClass(ClassLoader.java:531)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.spark.repl.h2o.InterpreterClassLoader.loadClass(InterpreterClassLoader.scala:37)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
... 19 more
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Exception while getting task result: com.esotericsoftware.kryo.KryoException: Unable to find class: org.apache.spark.h2o.package$StringHolder
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at swtest$.main(swtest.scala:68)
at swtest.main(swtest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:735)
at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I thought that including sparkling-water-assembly-1.6.4-all.jar should cover everything.
Does anyone have any ideas?
Thanks for the report.
You did indeed find a bug in Sparkling Water. The fix is in https://github.com/h2oai/sparkling-water/pull/151 and will go into the next release.
In the meantime, the simple workaround is to set conf.set("spark.ext.h2o.repl.enabled","false") on the SparkConf before creating the SparkContext, as Mateusz pointed out (if you don't run Scala code from the Flow UI).
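For concreteness, here is a minimal sketch of that workaround; everything apart from the spark.ext.h2o.repl.enabled setting (the app name, using H2OContext.getOrCreate) is my assumption about the surrounding setup, not taken from the original program:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.h2o.H2OContext

// Disable the H2O REPL class loader before the SparkContext is created;
// this sidesteps the InterpreterClassLoader path that fails to resolve StringHolder.
val conf = new SparkConf()
  .setAppName("SWTest") // assumed app name
  .set("spark.ext.h2o.repl.enabled", "false")
val sc = new SparkContext(conf)
val h2oContext = H2OContext.getOrCreate(sc) // the usual 1.6.x entry point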