Broadcast Random-Forest Model in PySpark

I am using Spark 1.4.1. When I try to broadcast a random forest model, it shows this error:

Traceback (most recent call last):
  File "/gpfs/haifa/home/d/a/davidbi/codeBook/Nice.py", line 358, in <module>
    broadModel = sc.broadcast(model)
  File "/opt/apache/spark-1.4.1-bin-hadoop2.4_doop/python/lib/pyspark.zip/pyspark/context.py", line 698, in broadcast
  File "/opt/apache/spark-1.4.1-bin-hadoop2.4_doop/python/lib/pyspark.zip/pyspark/broadcast.py", line 70, in __init__
  File "/opt/apache/spark-1.4.1-bin-hadoop2.4_doop/python/lib/pyspark.zip/pyspark/broadcast.py", line 78, in dump
  File "/opt/apache/spark-1.4.1-bin-hadoop2.4_doop/python/lib/pyspark.zip/pyspark/context.py", line 252, in __getnewargs__
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.

A sample of the code I am trying to execute:

from pyspark import SparkContext
from pyspark.mllib.tree import RandomForest

sc = SparkContext(appName="Something")
model = RandomForest.trainRegressor(sc.parallelize(data), categoricalFeaturesInfo=categorical,
                                    numTrees=100, featureSubsetStrategy="auto",
                                    impurity='variance', maxDepth=4)
broadModel = sc.broadcast(model)  # this line raises the exception above

I would really appreciate it if someone could help me. Many thanks!

The short answer is that this is not possible with PySpark. callJavaFunc, which prediction depends on, uses the SparkContext, hence the error. Something like this can be done with the Scala API, though.
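The traceback makes the mechanism visible: sc.broadcast pickles the value (with pickle protocol 2), the model's Java wrapper holds a reference to the SparkContext, and SparkContext.__getnewargs__ exists only to raise this exception (see SPARK-5063). A minimal sketch that trips the same guard, assuming an active sc:

import pickle

# Pickling anything that (transitively) references the SparkContext ends up
# pickling sc itself, and SparkContext.__getnewargs__ raises immediately.
try:
    pickle.dumps(sc, protocol=2)
except Exception as e:
    print(e)  # "It appears that you are attempting to reference SparkContext ..."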

In Python you can use the same approach as with a single model, meaning model.predict followed by zip.
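
For a single model that looks like this (a minimal sketch, assuming testData is an RDD of LabeledPoint and model is the trained regressor):

# Predict from the driver, then zip the true labels with the predictions
labels = testData.map(lambda lp: lp.label)
preds = model.predict(testData.map(lambda lp: lp.features))
labels_and_preds = labels.zip(preds)

With several models the same pattern extends to: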

from functools import reduce  # builtin on Python 2, but the explicit import works everywhere

# Several trained RandomForestModel instances
models = [model1, model2, model3]

# One prediction RDD per model, all issued from the driver
predictions = [
    model.predict(testData.map(lambda x: x.features)) for model in models]

def flatten(x):
    # Successive zips nest pairs as ((p1, p2), p3); unnest them into one
    # flat tuple. A plain (p1, p2) pair passes through unchanged.
    if isinstance(x[0], tuple):
        return tuple(list(x[0]) + [x[1]])
    else:
        return x

# Result: an RDD of (label, (pred_1, ..., pred_n)) rows
labels_and_predictions = (testData
    .map(lambda lp: lp.label)
    .zip(reduce(lambda p1, p2: p1.zip(p2).map(flatten), predictions)))
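
Each row of labels_and_predictions pairs the true label with the flat tuple of per-model predictions, for example (1.0, (0.9, 1.1, 1.0)) with hypothetical values. From there you can, say, average the models into a single ensemble estimate per record:

# Peek at a few (label, (pred_1, pred_2, pred_3)) rows
print(labels_and_predictions.take(3))

# Average the per-model predictions (just one possible way to combine them)
ensemble = labels_and_predictions.map(
    lambda row: (row[0], sum(row[1]) / len(row[1])))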

If you want to learn more about the source of the problem, take a look at: