SPARK: How to create categoricalFeaturesInfo for decision trees from LabeledPoint?

I have a LabeledPoint dataset on which I want to run a decision tree (and later a random forest):

scala> transformedData.collect
res8: Array[org.apache.spark.mllib.regression.LabeledPoint] = Array((0.0,(400036,[7744],[2.0])), (0.0,(400036,[7744,8608],[3.0,3.0])), (0.0,(400036,[7744],[2.0])), (0.0,(400036,[133,218,2162,7460,7744,9567],[1.0,1.0,2.0,1.0,42.0,21.0])), (0.0,(400036,[133,218,1589,2162,2784,2922,3274,6914,7008,7131,7460,8608,9437,9567,199999,200021,200035,200048,200051,200056,200058,200064,200070,200072,200075,200087,400008,400011],[4.0,1.0,6.0,53.0,6.0,1.0,1.0,2.0,11.0,17.0,48.0,3.0,4.0,113.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,28.0,1.0,1.0,1.0,1.0,1.0,4.0])), (0.0,(400036,[1589,3585,4830,6935,6936,7744,400008,400011],[2.0,6.0,3.0,52.0,4.0,3.0,1.0,2.0])), (0.0,(400036,[1589,2162,2784,2922,4123,7008,7131,7792,8608],[23.0,70.0,1.0,2.0,2.0,1.0,1.0,2.0,2.0])), (0.0,(400036,[4830,6935,6936,400008,400011],[1.0,36.0,...

Using the code:

import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.model.DecisionTreeModel
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.mllib.tree.impurity.Gini

val numClasses = 2
val categoricalFeaturesInfo = Map[Int, Int]() // what should this be changed to?
val impurity = "gini"
val maxDepth = 5
val maxBins = 32

val model = DecisionTree.trainClassifier(
  trainingData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)

In my data I have two kinds of features:

  1. Some features are counts of a user's visits to a given website/domain (the feature is the website/domain, and its value is the number of visits).

  2. The remaining features are declarative variables - binary/categorical.

Is there a way to create categoricalFeaturesInfo automatically from the LabeledPoint data? I would like to inspect the levels of my declarative variables (type 2) and use that information to build categoricalFeaturesInfo.

I have a list with the indices of the declarative variables:

List(6363,21345,23455,...

categoricalFeaturesInfo should map from a feature index to the number of categories of that feature. In general, identifying categorical variables automatically can be expensive, especially when they are heavily mixed with continuous variables. Moreover, depending on your data, it can produce both false positives and false negatives. Keep in mind that it is best to set these values manually.
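Since you already have the list of declarative-variable indices, one possible sketch is to scan the data once and derive the arity of each such feature. This assumes your categorical features are already encoded as 0, 1, ..., arity - 1, which is what MLlib's trees require; under that assumption the arity is the maximum observed value plus one. The helper name `buildCategoricalFeaturesInfo` is hypothetical:

```scala
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Hypothetical helper: given the indices of the declarative (categorical)
// features, compute arity = max observed value + 1 for each of them.
// Assumes categorical values are encoded as 0 .. arity - 1, as MLlib expects.
def buildCategoricalFeaturesInfo(
    data: RDD[LabeledPoint],
    categoricalIndices: List[Int]): Map[Int, Int] = {
  data
    .flatMap(lp => categoricalIndices.map(i => (i, lp.features(i))))
    .reduceByKey(math.max(_, _))                     // max value per feature index
    .collect()
    .map { case (i, maxVal) => i -> (maxVal.toInt + 1) }
    .toMap
}

// Usage sketch, with `transformedData` and your index list from above:
// val categoricalFeaturesInfo =
//   buildCategoricalFeaturesInfo(transformedData, List(6363, 21345, 23455))
```

If your categorical values are not contiguous starting at zero, you would first need to re-index them (e.g. with a StringIndexer-style mapping), otherwise the arity computed here will be wrong.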

If you still want to create categoricalFeaturesInfo automatically, take a look at ml.feature.VectorIndexer. It is not directly applicable to this case, but it should provide a useful code base for building your own solution.
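For reference, a minimal sketch of how VectorIndexer detects categorical features, assuming you have converted your data to a DataFrame `df` with a "features" column (the `maxCategories` value of 10 is an assumption to tune for your data):

```scala
import org.apache.spark.ml.feature.VectorIndexer

// VectorIndexer treats every feature with at most maxCategories distinct
// values as categorical and re-indexes its values to 0 .. arity - 1.
val indexer = new VectorIndexer()
  .setInputCol("features")
  .setOutputCol("indexedFeatures")
  .setMaxCategories(10)  // assumption: adjust to your data

val indexerModel = indexer.fit(df)

// categoryMaps: Map[featureIndex, Map[rawValue, categoryIndex]]
// The size of each inner map is the arity of that feature, so this yields
// a categoricalFeaturesInfo-shaped Map[Int, Int]:
val categoricalFeaturesInfo: Map[Int, Int] =
  indexerModel.categoryMaps.mapValues(_.size).toMap
```

Note that VectorIndexer belongs to the DataFrame-based spark.ml API, while DecisionTree.trainClassifier above is the RDD-based spark.mllib API, so you would either convert between the two or use ml.classification.DecisionTreeClassifier, which consumes the indexed column directly.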