Getting the error "java.lang.UnsupportedOperationException: empty.maxBy" while executing PySpark locally

I am building a model with RandomForestClassifier. Here is my code,

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf()
conf.setAppName('spark-nltk')
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# Each line is "<label> <sentence>"; split on the first space only
m = sc.textFile("Question_Type_Classification_testing_purpose/data/TREC_10.txt").map(lambda s: s.split(" ", 1))
df = m.toDF()

The resulting DataFrame gets the default column names "_1" and "_2": column "_1" holds the labels and column "_2" holds the training data, which are plain-text sentences. I then perform the following steps to build the model,

from pyspark.ml.feature import Tokenizer, HashingTF, StringIndexer
from pyspark.ml.classification import RandomForestClassifier

tokenizer = Tokenizer(inputCol="_2", outputCol="words")
tok = tokenizer.transform(df)
hashingTF = HashingTF(inputCol="words", outputCol="raw_features")
h = hashingTF.transform(tok)
indexer = StringIndexer(inputCol="_1", outputCol="idxlabel").fit(df)
idx = indexer.transform(h)
lr = RandomForestClassifier(labelCol="idxlabel").setFeaturesCol("raw_features")
model = lr.fit(idx)
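
For reference, the same stages could also be chained with a Pipeline; this is only a rough sketch of that alternative (it leaves out the custom POS tagger mentioned in the next paragraph):

from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, StringIndexer
from pyspark.ml.classification import RandomForestClassifier

# Same stages as above, just chained in a Pipeline instead of
# calling transform() on each one manually.
tokenizer = Tokenizer(inputCol="_2", outputCol="words")
hashingTF = HashingTF(inputCol="words", outputCol="raw_features")
indexer = StringIndexer(inputCol="_1", outputCol="idxlabel")
rf = RandomForestClassifier(labelCol="idxlabel", featuresCol="raw_features")

pipeline = Pipeline(stages=[tokenizer, hashingTF, indexer, rf])
pipeline_model = pipeline.fit(df)  # fits the indexer and trains the forest in one call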

I know I could use a Pipeline() (as in the sketch above) instead of calling transform() on every stage, but I also need to plug in my custom POS tagger, and there are known issues with persisting a pipeline that contains a custom transformer, so I fell back to applying the standard stages manually. I submit my Spark job with the following command,

spark-submit --driver-memory 5g Question_Type_Classification_testing_purpose/spark-nltk.py

Since I am running the job locally, and in local mode the executors run inside the driver process, setting the driver memory to 5g effectively gives the executor 5g as well.

17/04/01 02:59:19 INFO Executor: Running task 1.0 in stage 15.0 (TID 25)
17/04/01 02:59:19 INFO Executor: Running task 0.0 in stage 15.0 (TID 24)
17/04/01 02:59:19 INFO BlockManager: Found block rdd_38_1 locally
17/04/01 02:59:19 INFO BlockManager: Found block rdd_38_0 locally
17/04/01 02:59:19 INFO Executor: Finished task 1.0 in stage 15.0 (TID 25). 2432 bytes result sent to driver
17/04/01 02:59:19 INFO Executor: Finished task 0.0 in stage 15.0 (TID 24). 2432 bytes result sent to driver
17/04/01 02:59:19 INFO TaskSetManager: Finished task 1.0 in stage 15.0 (TID 25) in 390 ms on localhost (executor driver) (1/2)
17/04/01 02:59:19 INFO TaskSetManager: Finished task 0.0 in stage 15.0 (TID 24) in 390 ms on localhost (executor driver) (2/2)
17/04/01 02:59:19 INFO TaskSchedulerImpl: Removed TaskSet 15.0, whose tasks have all completed, from pool
17/04/01 02:59:19 INFO DAGScheduler: ShuffleMapStage 15 (mapPartitions at RandomForest.scala:534) finished in 0.390 s
17/04/01 02:59:19 INFO DAGScheduler: looking for newly runnable stages
17/04/01 02:59:19 INFO DAGScheduler: running: Set()
17/04/01 02:59:19 INFO DAGScheduler: waiting: Set(ResultStage 16)
17/04/01 02:59:19 INFO DAGScheduler: failed: Set()
17/04/01 02:59:19 INFO DAGScheduler: Submitting ResultStage 16 (MapPartitionsRDD[47] at map at RandomForest.scala:553), which has no missing parents
17/04/01 02:59:19 INFO MemoryStore: Block broadcast_20 stored as values in memory (estimated size 2.6 MB, free 1728.4 MB)
17/04/01 02:59:19 INFO MemoryStore: Block broadcast_20_piece0 stored as bytes in memory (estimated size 57.0 KB, free 1728.4 MB)
17/04/01 02:59:19 INFO BlockManagerInfo: Added broadcast_20_piece0 in memory on 192.168.56.1:55850 (size: 57.0 KB, free: 1757.9 MB)
17/04/01 02:59:19 INFO SparkContext: Created broadcast 20 from broadcast at DAGScheduler.scala:996
17/04/01 02:59:19 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 16 (MapPartitionsRDD[47] at map at RandomForest.scala:553)
17/04/01 02:59:19 INFO TaskSchedulerImpl: Adding task set 16.0 with 2 tasks
17/04/01 02:59:19 INFO TaskSetManager: Starting task 0.0 in stage 16.0 (TID 26, localhost, executor driver, partition 0, ANY, 5848 bytes)
17/04/01 02:59:19 INFO TaskSetManager: Starting task 1.0 in stage 16.0 (TID 27, localhost, executor driver, partition 1, ANY, 5848 bytes)
17/04/01 02:59:19 INFO Executor: Running task 0.0 in stage 16.0 (TID 26)
17/04/01 02:59:19 INFO Executor: Running task 1.0 in stage 16.0 (TID 27)
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
17/04/01 02:59:19 INFO Executor: Finished task 1.0 in stage 16.0 (TID 27). 14434 bytes result sent to driver
17/04/01 02:59:19 INFO TaskSetManager: Finished task 1.0 in stage 16.0 (TID 27) in 78 ms on localhost (executor driver) (1/2)
17/04/01 02:59:19 ERROR Executor: Exception in task 0.0 in stage 16.0 (TID 26)
java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/04/01 02:59:19 WARN TaskSetManager: Lost task 0.0 in stage 16.0 (TID 26, localhost, executor driver): java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

17/04/01 02:59:19 ERROR TaskSetManager: Task 0 in stage 16.0 failed 1 times; aborting job
17/04/01 02:59:19 INFO TaskSchedulerImpl: Removed TaskSet 16.0, whose tasks have all completed, from pool
17/04/01 02:59:19 INFO TaskSchedulerImpl: Cancelling stage 16
17/04/01 02:59:19 INFO DAGScheduler: ResultStage 16 (collectAsMap at RandomForest.scala:563) failed in 0.328 s due to Job aborted due to stage failure: Task 0 in stage 16.0 failed 1 times, most recent failure: Lost task 0.0 in stage 16.0 (TID 26, localhost, executor driver): java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
17/04/01 02:59:19 INFO DAGScheduler: Job 11 failed: collectAsMap at RandomForest.scala:563, took 1.057042 s
Traceback (most recent call last):
  File "C:/SPARK2.0/bin/Question_Type_Classification_testing_purpose/spark-nltk2.py", line 178, in <module>
    model=lr.fit(idx)
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\ml\base.py", line 64, in fit
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\ml\wrapper.py", line 236, in _fit
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\ml\wrapper.py", line 233, in _fit_java
  File "C:\SPARK2.0\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\sql\utils.py", line 63, in deco
  File "C:\SPARK2.0\python\lib\py4j-0.10.4-src.zip\py4j\protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o86.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 1 times, most recent failure: Lost task 0.0 in stage 16.0 (TID 26, localhost, executor driver): java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
    at org.apache.spark.rdd.RDD$$anonfun$collect.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap.apply(PairRDDFunctions.scala:748)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap.apply(PairRDDFunctions.scala:747)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:747)
    at org.apache.spark.ml.tree.impl.RandomForest$.findBestSplits(RandomForest.scala:563)
    at org.apache.spark.ml.tree.impl.RandomForest$.run(RandomForest.scala:198)
    at org.apache.spark.ml.classification.RandomForestClassifier.train(RandomForestClassifier.scala:137)
    at org.apache.spark.ml.classification.RandomForestClassifier.train(RandomForestClassifier.scala:45)
    at org.apache.spark.ml.Predictor.fit(Predictor.scala:96)
    at org.apache.spark.ml.Predictor.fit(Predictor.scala:72)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$$anonfun.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

17/04/01 02:59:20 INFO SparkContext: Invoking stop() from shutdown hook
17/04/01 02:59:20 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040

When I submit the Spark job, about 6 GB of my 8 GB of physical RAM is free, so I set the driver memory to 5g. I initially gave it 4g but got an "OutOfMemory" error, so I raised it to 5. My dataset is very small: about 900 records of plain-text sentences, roughly 50 KB on disk.

What could be causing this error? I tried both reducing and increasing the data size, but nothing changed. Can anyone tell me what I am doing wrong? Do I need to set any other conf variables? Is it because of RF? Any help is greatly appreciated. I am running PySpark 2.1 with Python 2.7 on a Windows machine with 4 cores.

I am not sure what the problem with RandomForest is, but the code above works if I use DecisionTreeClassifier instead. Possibly, since RF is an ensemble method, it does something extra with the dataset that triggers the error; with DT, it works fine.
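
The workaround is just a one-line swap of the estimator; a minimal sketch using the same variables as in the question:

from pyspark.ml.classification import DecisionTreeClassifier

# Same label and features as before; only the estimator changes.
dt = DecisionTreeClassifier(labelCol="idxlabel", featuresCol="raw_features")
model = dt.fit(idx)  # trains without hitting empty.maxBy on my data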

For now, this is the only workaround I can think of. If anyone comes up with a proper fix for the issue above, please let me know.

I also hit this problem while training a RandomForest classifier on Spark 2.1. I had not seen it before because I was previously on Spark 2.0, so I tried Spark 2.0.2 and my program ran through without errors.

As for the error, my guess is that the data is not evenly partitioned. Maybe try repartitioning the data before training, and do not use too many partitions; otherwise each partition may end up with only a few data points, which can lead to edge cases during training.
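
A minimal sketch of what I mean, using the variable names from the question and an arbitrary partition count (tune the number to your data and core count):

# Repartition the training DataFrame into a few even partitions
# before fitting, so no partition ends up nearly empty.
idx_repart = idx.repartition(4)  # e.g. one partition per core
model = lr.fit(idx_repart)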

That is a Spark bug. It will be fixed in the 2.2 release. See the details here: https://issues.apache.org/jira/browse/SPARK-18036