What Type should the dense vector be, when using UDF function in Pyspark?
I want to convert a list column into a Vector in PySpark and then use that column to train a machine learning model. But my Spark version is 1.6.0, which does not have VectorUDT(). So what type should I return from my udf?
from pyspark.sql import SQLContext
from pyspark import SparkContext, SparkConf
from pyspark.sql.functions import *
from pyspark.mllib.linalg import DenseVector
from pyspark.mllib.linalg import Vectors
from pyspark.sql.types import *
conf = SparkConf().setAppName('rank_test')
sc = SparkContext(conf=conf)
spark = SQLContext(sc)
df = spark.createDataFrame([[[0.1,0.2,0.3,0.4,0.5]]],['a'])
print '???'
df.show()
def list2vec(column):
    print '?????', column
    return Vectors.dense(column)
getVector = udf(lambda y: list2vec(y),DenseVector() )
df.withColumn('b',getVector(col('a'))).show()
I have tried many types; this DenseVector() gives me the error:
Traceback (most recent call last):
File "t.py", line 21, in <module>
getVector = udf(lambda y: list2vec(y),DenseVector() )
TypeError: __init__() takes exactly 2 arguments (1 given)
Please help me.
You can use Vectors and VectorUDT with a UDF. The second argument to udf must be a SQL DataType; DenseVector is the vector class itself, not a DataType, which is why DenseVector() fails (its constructor expects the vector's values):
from pyspark.ml.linalg import Vectors, VectorUDT
from pyspark.sql import functions as F
ud_f = F.udf(lambda r : Vectors.dense(r),VectorUDT())
df = df.withColumn('b',ud_f('a'))
df.show()
+-------------------------+---------------------+
|a |b |
+-------------------------+---------------------+
|[0.1, 0.2, 0.3, 0.4, 0.5]|[0.1,0.2,0.3,0.4,0.5]|
+-------------------------+---------------------+
df.printSchema()
root
|-- a: array (nullable = true)
| |-- element: double (containsNull = true)
|-- b: vector (nullable = true)
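Since the end goal was to train a model on this column, here is a minimal sketch of feeding the new vector column to a pyspark.ml estimator. It assumes Spark 2.x; the constant 'label' column is a hypothetical stand-in just to make the snippet self-contained:

from pyspark.ml.regression import LinearRegression

# hypothetical label column so the example runs end to end
train = df.withColumn('label', F.lit(1.0))

# point the estimator at the vector column produced by the UDF
lr = LinearRegression(featuresCol='b', labelCol='label')
model = lr.fit(train)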
For more on VectorUDT, see http://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/linalg.html
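If you are stuck on Spark 1.6 as the question says, pyspark.mllib.linalg also defines a VectorUDT, so the same pattern should work there. A minimal sketch, assuming the 1.6 mllib API:

from pyspark.mllib.linalg import Vectors, VectorUDT
from pyspark.sql.functions import udf, col

# the return type is VectorUDT(), the SQL type for vectors, not DenseVector
list2vec = udf(lambda xs: Vectors.dense(xs), VectorUDT())
df = df.withColumn('b', list2vec(col('a')))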