Create a dataframe with SparseVector in PySpark

Suppose I have a Spark dataframe that looks like this:

Row(Y=a, X1=3.2, X2=4.5)

What I want is this:

Row(Y=a, features=SparseVector(2, {X1: 3.2, X2: 4.5}))

Maybe this will help -

This is written in Scala, but it can be implemented in PySpark with minimal changes; a PySpark sketch follows the Scala output below.

Use VectorAssembler to create a vector from the input columns:

    val df = spark.sql("select 'a' as Y, 3.2 as X1, 4.5 as X2")
    df.show(false)
    df.printSchema()

    /**
      * +---+---+---+
      * |Y  |X1 |X2 |
      * +---+---+---+
      * |a  |3.2|4.5|
      * +---+---+---+
      *
      * root
      * |-- Y: string (nullable = false)
      * |-- X1: decimal(2,1) (nullable = false)
      * |-- X2: decimal(2,1) (nullable = false)
      */
    import org.apache.spark.ml.feature.VectorAssembler
    // combine the input columns X1 and X2 into a single vector column
    val features = new VectorAssembler()
      .setInputCols(Array("X1", "X2"))
      .setOutputCol("features")
      .transform(df)
    features.show(false)
    features.printSchema()

    /**
      * +---+---+---+---------+
      * |Y  |X1 |X2 |features |
      * +---+---+---+---------+
      * |a  |3.2|4.5|[3.2,4.5]|
      * +---+---+---+---------+
      *
      * root
      * |-- Y: string (nullable = false)
      * |-- X1: decimal(2,1) (nullable = false)
      * |-- X2: decimal(2,1) (nullable = false)
      * |-- features: vector (nullable = true)
      */
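
For completeness, here is a rough PySpark equivalent. This is a minimal sketch assuming a local SparkSession. Note that VectorAssembler picks dense or sparse storage on its own (whichever is smaller), so the assembled column may come back as a DenseVector; the `to_sparse` UDF below is a hypothetical helper, added only to force the SparseVector the question asks for.

    import numpy as np
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.linalg import SparseVector, VectorUDT

    spark = SparkSession.builder.getOrCreate()

    df = spark.createDataFrame([("a", 3.2, 4.5)], ["Y", "X1", "X2"])

    # combine X1 and X2 into one vector column, as in the Scala code above
    features = (VectorAssembler(inputCols=["X1", "X2"], outputCol="features")
                .transform(df))

    # force a sparse representation by keeping only the non-zero entries
    # ("to_sparse" is an illustrative name, not a built-in function)
    def to_sparse(v):
        arr = v.toArray()
        nz = np.nonzero(arr)[0]
        return SparseVector(len(arr), nz.tolist(), arr[nz].tolist())

    result = features.withColumn("features", udf(to_sparse, VectorUDT())("features"))
    result.show(truncate=False)

    # +---+---+---+-------------------+
    # |Y  |X1 |X2 |features           |
    # +---+---+---+-------------------+
    # |a  |3.2|4.5|(2,[0,1],[3.2,4.5])|
    # +---+---+---+-------------------+

One difference from the desired output in the question: the keys inside a SparseVector are positional indices (0 for X1, 1 for X2), not column names, so the mapping back to names is determined by the assembler's input column order.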