Apache Spark MultilayerPerceptronClassifier fails with ArrayIndexOutOfBoundsException
I am using this code to try to make a prediction:
import org.apache.spark.sql.functions.col
import org.apache.spark.Logging
import org.apache.spark.graphx._
import org.apache.spark.{ SparkConf, SparkContext }
import org.apache.spark.SparkContext._
import org.apache.spark.sql.SQLContext._
import org.apache.log4j.Logger
import org.apache.log4j.Level
import org.apache.spark.ml.feature.VectorAssembler

object NN extends App {
  Logger.getLogger("org").setLevel(Level.OFF)
  Logger.getLogger("akka").setLevel(Level.OFF)

  val sc = new SparkContext(new SparkConf().setMaster("local[2]")
    .setAppName("cs"))
  val sqlContext = new org.apache.spark.sql.SQLContext(sc)
  import sqlContext.implicits._

  val df = sc.parallelize(Seq(
    ("3", "1", "1"),
    ("2", "1", "1"),
    ("2", "3", "3"),
    ("3", "3", "3"),
    ("0", "1", "0")))
    .toDF("label", "feature1", "feature2")

  // Cast every string column to double so the ML pipeline can use it.
  val numeric = df
    .select(df.columns.map(c => col(c).cast("double").alias(c)): _*)

  // Combine the two feature columns into a single vector column "features".
  val assembler = new VectorAssembler()
    .setInputCols(Array("feature1", "feature2"))
    .setOutputCol("features")
  val data = assembler.transform(numeric)

  import org.apache.spark.ml.classification.MultilayerPerceptronClassifier

  val layers = Array[Int](2, 3, 5, 4) // Note 2 neurons in the input layer

  val trainer = new MultilayerPerceptronClassifier()
    .setLayers(layers)
    .setBlockSize(128)
    .setSeed(1234L)
    .setMaxIter(100)

  val model = trainer.fit(data)
  model.transform(data).show
}
If, in the DataFrame (df), I use
("4", "1", "1")
instead of ("3", "1", "1")
I get the error:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Set current project to spark-applications1458853926-master (in build file:/C:/Users/Desktop/spark-applications1458853926-master/)
[info] Compiling 1 Scala source to C:\Users\Desktop\spark-applications1458853926-master\target\scala-2.11\classes...
[info] Running NN
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/04/06 12:42:11 INFO Remoting: Starting remoting
16/04/06 12:42:11 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.95.132.202:64056]
[error] (run-main-0) org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ArrayIndexOutOfBoundsException: 4
[error] at org.apache.spark.ml.classification.LabelConverter$.encodeLabeledPoint(MultilayerPerceptronClassifier.scala:85)
[error] at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun.apply(MultilayerPerceptronClassifier.scala:165)
[error] at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun.apply(MultilayerPerceptronClassifier.scala:165)
[error] at scala.collection.Iterator$$anon.next(Iterator.scala:370)
[error] at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:934)
[error] at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:949)
[error] at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:986)
[error] at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:990)
[error] at scala.collection.Iterator$$anon.hasNext(Iterator.scala:369)
[error] at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1595)
[error] at org.apache.spark.rdd.RDD$$anonfun$count.apply(RDD.scala:1143)
[error] at org.apache.spark.rdd.RDD$$anonfun$count.apply(RDD.scala:1143)
[error] at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1858)
[error] at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1858)
[error] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
[error] at org.apache.spark.scheduler.Task.run(Task.scala:89)
[error] at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[error] at java.lang.Thread.run(Thread.java:745)
[error]
[error] Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ArrayIndexOutOfBoundsException: 4
at org.apache.spark.ml.classification.LabelConverter$.encodeLabeledPoint(MultilayerPerceptronClassifier.scala:85)
at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun.apply(MultilayerPerceptronClassifier.scala:165)
at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun.apply(MultilayerPerceptronClassifier.scala:165)
at scala.collection.Iterator$$anon.next(Iterator.scala:370)
at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:934)
at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:949)
at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:986)
at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:990)
at scala.collection.Iterator$$anon.hasNext(Iterator.scala:369)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1595)
at org.apache.spark.rdd.RDD$$anonfun$count.apply(RDD.scala:1143)
at org.apache.spark.rdd.RDD$$anonfun$count.apply(RDD.scala:1143)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
at org.apache.spark.mllib.optimization.LBFGS$.runLBFGS(LBFGS.scala:170)
at org.apache.spark.mllib.optimization.LBFGS.optimize(LBFGS.scala:117)
at org.apache.spark.ml.ann.FeedForwardTrainer.train(Layer.scala:878)
at org.apache.spark.ml.classification.MultilayerPerceptronClassifier.train(MultilayerPerceptronClassifier.scala:170)
at org.apache.spark.ml.classification.MultilayerPerceptronClassifier.train(MultilayerPerceptronClassifier.scala:110)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:90)
at NN$.delayedEndpoint$NN(NN.scala:56)
at NN$delayedInit$body.apply(NN.scala:15)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main.apply(App.scala:76)
at scala.App$$anonfun$main.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at NN$.main(NN.scala:15)
at NN.main(NN.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 4
at org.apache.spark.ml.classification.LabelConverter$.encodeLabeledPoint(MultilayerPerceptronClassifier.scala:85)
at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun.apply(MultilayerPerceptronClassifier.scala:165)
at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun.apply(MultilayerPerceptronClassifier.scala:165)
at scala.collection.Iterator$$anon.next(Iterator.scala:370)
at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:934)
at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:949)
at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:986)
at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:990)
at scala.collection.Iterator$$anon.hasNext(Iterator.scala:369)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1595)
at org.apache.spark.rdd.RDD$$anonfun$count.apply(RDD.scala:1143)
at org.apache.spark.rdd.RDD$$anonfun$count.apply(RDD.scala:1143)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 19 s, completed 06-Apr-2016 12:42:20
Why am I getting an ArrayIndexOutOfBoundsException? Am I not setting up the labels correctly? Can't labels take arbitrary values, since they are just labels? In this example they apparently have to be in the range 0-3.
It appears that the last layer (I am putting together my first serious example by myself) represents the labels, rounded to int values, so by declaring that value as 4 you are expected to use labels 0, 1, 2, 3. Apparently the code is designed to create a neural network that classifies its output into a series of states based on the input. I am trying to work out how to use this to write a tic-tac-toe player.
The output layer uses one-hot encoding; that is, a label of "3" is converted to (0,0,0,1), where the 'third' element is 1 and the rest are 0. When you have 4 output nodes and a label of 4, the LabelConverter function (whose source is visible here) fails: labelCount is 4 and labeledPoint.label.toInt is 4, hence your error.
val output = Array.fill(labelCount)(0.0)
output(labeledPoint.label.toInt) = 1.0
(labeledPoint.features, Vectors.dense(output))
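As a quick standalone sketch (my own illustration of the same logic, not Spark's actual code path), you can reproduce the out-of-bounds access directly:

// A 4-element array has valid indices 0..3, so a label of 4 overflows it.
val labelCount = 4                       // size of the output layer
val output = Array.fill(labelCount)(0.0)
output(3) = 1.0 // label 3 -> (0.0, 0.0, 0.0, 1.0): fine
output(4) = 1.0 // label 4 -> java.lang.ArrayIndexOutOfBoundsException: 4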
So change this line:
val layers = Array[Int](2, 3, 5, 4) // Note 2 neurons in the input layer
to this:
val layers = Array[Int](2, 3, 5, 5) // Note 2 neurons in the input layer and 5 neurons in the output layer
and I expect it will work.
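Alternatively, here is a sketch of my own (assuming, as in your data, that labels are non-negative integers stored as doubles): derive the output-layer size from the data itself, so the last layer always covers the largest label value.

import org.apache.spark.sql.functions.max

// One-hot encoding of labels 0..maxLabel needs maxLabel + 1 output neurons.
// `numeric` is the casted DataFrame from the question.
val maxLabel = numeric.agg(max("label")).head.getDouble(0).toInt
val layers = Array[Int](2, 3, 5, maxLabel + 1)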