Create one hot encoded vector from category list in Spark

Suppose I have 5 categories (A, B, C, D, E) and a customer dataset where each customer can belong to one, many, or none of the categories. How can I take a dataset like this:

id, categories
1 , [A,C]
2 , [B]
3 , []
4 , [D,E]

and turn the categories column into one-hot encoded vectors, like this:

id, categories, encoded
1 , [A,C]     , [1,0,1,0,0]
2 , [B]       , [0,1,0,0,0]
3 , []        , [0,0,0,0,0]
4 , [D,E]     , [0,0,0,1,1]

Has anyone found a simple way to do this in Spark?

Use CountVectorizerModel

This is quite easy to do, and the result is essentially equivalent:

import org.apache.spark.ml.feature.CountVectorizerModel

val df = spark.createDataFrame(Seq(
  (1, Seq("A","C")),
  (2, Seq("B")),
  (3, Seq()),
  (4, Seq("D","E")))
).toDF("id", "category")

val cvm = new CountVectorizerModel(Array("A","B","C","D","E"))
  .setInputCol("category")
  .setOutputCol("features")

cvm.transform(df).show()

/*
+---+--------+-------------------+
| id|category|           features|
+---+--------+-------------------+
|  1|  [A, C]|(5,[0,2],[1.0,1.0])|
|  2|     [B]|      (5,[1],[1.0])|
|  3|      []|          (5,[],[])|
|  4|  [D, E]|(5,[3,4],[1.0,1.0])|
+---+--------+-------------------+
*/

This isn't exactly what you asked for, but the feature vector tells you which categories are present in the data. For example, in row 1, [0,2] refers to the first and third elements of the vocabulary, i.e. "A" and "C".
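
If you just want to check which categories each sparse vector refers to, here is a minimal sketch (reusing the df and cvm defined above) that maps the active indices back through the model's vocabulary:

import org.apache.spark.ml.linalg.SparseVector

val vocab = cvm.vocabulary  // Array("A", "B", "C", "D", "E"), in the order given above

cvm.transform(df).collect().foreach { row =>
  val sv = row.getAs[SparseVector]("features")
  // sv.indices holds the positions of the non-zero entries, e.g. [0, 2] -> "A", "C"
  val names = sv.indices.map(i => vocab(i))
  println(s"id=${row.getAs[Int]("id")} -> ${names.mkString("[", ",", "]")}")
}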

To get exactly the output you want, you can extend Stephen Carman's answer with a Spark UDF (user-defined function):

import org.apache.spark.ml.feature.CountVectorizerModel
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf
import spark.implicits._

// Prepare the data as (id, categories) rows.
val data = spark.createDataFrame(Seq(
  (0L, Seq("A", "B")),
  (1L, Seq("B")),
  (2L, Seq.empty),
  (3L, Seq("D", "E"))
)).toDF("id", "categories")

// Get distinct tags array
val tags = data
  .flatMap(r => r.getAs[Seq[String]]("categories"))
  .distinct()
  .collect()
  .sortWith(_ < _)

val cvmData = new CountVectorizerModel(tags)
  .setInputCol("categories")
  .setOutputCol("sparseFeatures")
  .transform(data)

// UDF that converts the sparse feature vector to a dense vector
val asDense = udf((v: Vector) => v.toDense)

cvmData
  .withColumn("features", asDense($"sparseFeatures"))
  .select("id", "categories", "features")
  .show()

This gives you the desired output:

+---+----------+-----------------+
| id|categories|         features|
+---+----------+-----------------+
|  0|    [A, B]|[1.0,1.0,0.0,0.0]|
|  1|       [B]|[0.0,1.0,0.0,0.0]|
|  2|        []|[0.0,0.0,0.0,0.0]|
|  3|    [D, E]|[0.0,0.0,1.0,1.0]|
+---+----------+-----------------+
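
As a side note, on Spark 3.0+ you could skip the custom UDF and use the built-in vector_to_array function, which turns the ML vector column into a plain array<double> column. A minimal sketch, assuming the cvmData frame built above:

import org.apache.spark.ml.functions.vector_to_array

cvmData
  .withColumn("features", vector_to_array($"sparseFeatures"))  // array<double> instead of Vector
  .select("id", "categories", "features")
  .show()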