Error in PySpark trying to run Word2Vec example

I am trying to run the very simple Word2Vec example given in the documentation here:

https://spark.apache.org/docs/1.4.1/api/python/_modules/pyspark/ml/feature.html#Word2Vec

from pyspark import SparkContext, SQLContext
from pyspark.mllib.feature import Word2Vec
sqlContext = SQLContext(sc)

sent = ("a b " * 100 + "a c " * 10).split(" ")
doc = sqlContext.createDataFrame([(sent,), (sent,)], ["sentence"])
model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence", outputCol="model").fit(doc)
model.getVectors().show()
model.findSynonyms("a", 2).show()

TypeError                                 Traceback (most recent call last)
<ipython-input-4-e57e9f694961> in <module>()
      5 sent = ("a b " * 100 + "a c " * 10).split(" ")
      6 doc = sqlContext.createDataFrame([(sent,), (sent,)], ["sentence"])
----> 7 model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence", outputCol="model").fit(doc)
      8 model.getVectors().show()
      9 model.findSynonyms("a", 2).show()

TypeError: __init__() got an unexpected keyword argument 'vectorSize'

Any idea why this fails?

You are referring to the documentation for the ml package, but importing from the mllib package. In mllib, Word2Vec.__init__ does not accept any keyword arguments. You meant:

from pyspark.ml.feature import Word2Vec

Output:

+----+--------------------+
|word|              vector|
+----+--------------------+
|   a|[-0.3511952459812...|
|   b|[0.29077222943305...|
|   c|[0.02315592765808...|
+----+--------------------+

+----+-------------------+
|word|         similarity|
+----+-------------------+
|   b|0.29255685145799626|
|   c|-0.5414068302988307|
+----+-------------------+