How to get item from vector struct in PySpark
I'm trying to get the array of scores out of a TF-IDF result vector.
For example:
rescaledData.select("words", "features").show(truncate=False)
+-----------------------------+---------------------------------------------------------------------------------------------+
|words |features |
+-----------------------------+---------------------------------------------------------------------------------------------+
|[a, b, c] |(4527,[0,1,31],[0.6363067860791387,1.0888040725098247,4.371858972705023]) |
|[d] |(4527,[8],[2.729945780576634]) |
+-----------------------------+---------------------------------------------------------------------------------------------+
rescaledData.select(rescaledData['features'].getItem('values')).show()
But instead of an array I get an error:
AnalysisException: u"Can't extract value from features#1786: need struct type but got struct<type:tinyint,size:int,indices:array<int>,values:array<double>>;"
What I want is:
+--------------------------+-----------------------------------------------------------+
|words |features |
+--------------------------+-----------------------------------------------------------+
|[a, b, c] |[0.6363067860791387, 1.0888040725098247, 4.371858972705023]|
+--------------------------+-----------------------------------------------------------+
How can I solve this?
Prepare the data:
from pyspark.ml.linalg import SparseVector
from pyspark.sql import Row
df = spark.createDataFrame(
    [
        [["a", "b", "c"], SparseVector(4527, {0: 0.6363067860791387, 1: 1.0888040725098247, 31: 4.371858972705023})],
        [["d"], SparseVector(4527, {8: 2.729945780576634})],
    ],
    ["word", "features"],
)
Use the rdd to get the values of the sparse vector:
df.rdd.map(lambda x: Row(word=x["word"], features=x["features"].values.tolist())).toDF().show()
+--------------------+---------+
| features| word|
+--------------------+---------+
|[0.63630678607913...|[a, b, c]|
| [2.729945780576634]| [d]|
+--------------------+---------+
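If you also need the non-zero indices next to the values, the same rdd pass can return both; SparseVector exposes indices and values as numpy arrays, so tolist() turns them into plain Python lists (the indices column name here is my own choice):
df.rdd.map(lambda x: Row(
    word=x["word"],
    indices=x["features"].indices.tolist(),
    values=x["features"].values.tolist(),
)).toDF().show()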
Another option is to create a udf that extracts the values from the sparse vector:
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType, ArrayType
sparse_values = udf(lambda v: v.values.tolist(), ArrayType(DoubleType()))
df.withColumn("features", sparse_values("features")).show(truncate=False)
+---------+-----------------------------------------------------------+
|word |features |
+---------+-----------------------------------------------------------+
|[a, b, c]|[0.6363067860791387, 1.0888040725098247, 4.371858972705023]|
|[d] |[2.729945780576634] |
+---------+-----------------------------------------------------------+
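On Spark 3.0 and later there is also the built-in pyspark.ml.functions.vector_to_array, which avoids the Python udf round-trip. Note that it densifies the vector to its full length (4527 here), so it returns the zeros too and is not a drop-in replacement when you only want the non-zero scores:
from pyspark.ml.functions import vector_to_array
df.withColumn("features", vector_to_array("features")).show()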