Why does Spark's GaussianMixture return identical clusters?

I'm using spark-1.5.2 to cluster a dataset with GaussianMixture. Nothing goes wrong, except that the resulting Gaussians in the GaussianMixtureModel are identical and their weights are all the same. The number of iterations needed to reach the specified tolerance is about 2, which seems far too low.

Which parameters can I tune so that the clusters come out with distinct values?

import org.apache.spark.SparkContext
import org.apache.spark.rdd._
import org.apache.spark.mllib.clustering.GaussianMixture
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Build a local SparkContext for this console session.
def sparkContext: SparkContext = {
  import org.apache.spark.SparkConf
  new SparkContext(new SparkConf().setMaster("local[*]").setAppName("console"))
}

implicit val sc = sparkContext

// Parse each CSV line into a dense feature vector.
def observationsRdd(implicit sc: SparkContext): RDD[Vector] = {
  sc.textFile("observations.csv")
    .map { line => Vectors.dense(line.split(",").map(_.toDouble)) }
}

val gmm = new GaussianMixture()
  .setK(6)
  .setMaxIterations(1000)
  .setConvergenceTol(0.001)
  .setSeed(1)
  .run(observationsRdd)

for (i <- 0 until gmm.k) {
  println("weight=%f\nmu=%s\nsigma=\n%s\n" format
    (gmm.weights(i), gmm.gaussians(i).mu, gmm.gaussians(i).sigma))
}

Truncated output:

weight=0.166667
mu=[4730.358845338535,4391.695550847029,4072.3224046605947,4253.183898304653,4454.124682202946,4775.553442796136,4980.3952860164545,4812.717637711368,5120.44449152493,2820.1827330505857,180.10291313557565,4189.185858050445,3690.793644067457]
sigma=
422700.24745093845  382225.3248240414   398121.9356855869   ... (13 total)
382225.3248240414   471186.33178427175  455777.0565262309   ...
398121.9356855869   455777.0565262309   461210.0532084378   ...
469361.3787142044   497432.39963363775  515341.1303306988   ...
474369.6318494179   482754.83801426284  500047.5114985542   ...
453832.62301188655  443147.58931290614  461017.7038258409   ...
458641.51202210854  433511.1974652861   452015.6655154465   ...
387980.29836054996  459673.3283909025   455118.78272128507  ...
461724.87201332086  423688.91832506843  442649.18455604656  ...
291940.48273324646  257309.1054220978   269116.23674394307  ...
16289.3063964479    14790.06803739929   15387.484828872432  ...
334045.5231910066   338403.3492767321   350531.7768916226   ...
280036.0894114749   267624.69326772855  279651.401859903    ...

weight=0.166667
mu=[4730.358845338535,4391.695550847029,4072.3224046605947,4253.183898304653,4454.124682202946,4775.553442796136,4980.3952860164545,4812.717637711368,5120.44449152493,2820.1827330505857,180.10291313557565,4189.185858050445,3690.793644067457]
sigma=
422700.24745093845  382225.3248240414   398121.9356855869   ... (13 total)
382225.3248240414   471186.33178427175  455777.0565262309   ...
398121.9356855869   455777.0565262309   461210.0532084378   ...
469361.3787142044   497432.39963363775  515341.1303306988   ...
474369.6318494179   482754.83801426284  500047.5114985542   ...
453832.62301188655  443147.58931290614  461017.7038258409   ...
458641.51202210854  433511.1974652861   452015.6655154465   ...
387980.29836054996  459673.3283909025   455118.78272128507  ...
461724.87201332086  423688.91832506843  442649.18455604656  ...
291940.48273324646  257309.1054220978   269116.23674394307  ...
16289.3063964479    14790.06803739929   15387.484828872432  ...
334045.5231910066   338403.3492767321   350531.7768916226   ...
280036.0894114749   267624.69326772855  279651.401859903    ...

(The remaining four components repeat the same weight, mu, and sigma.)

The code, input data, and output data are also available as a gist at https://gist.github.com/aaron-santos/91b4931a446c460e082b2b3055b9950f

Thanks

I ran your data through ELKI (I had to remove the last line, which was incomplete). At first it did not work there either, and I believe that is due to the scale of the attributes combined with the default initialization. The same problem may well exist in Spark.
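
If attribute scale is indeed the problem, the same normalization can be applied on the Spark side before clustering. Here is a minimal sketch, assuming MLlib's StandardScaler from spark-1.5.2 (the standardize helper name is illustrative); it rescales every attribute to zero mean and unit variance, much like ELKI's AttributeWiseVarianceNormalization:

import org.apache.spark.mllib.feature.StandardScaler
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Rescale each attribute to zero mean and unit variance so that no
// single attribute dominates the covariance estimates during EM.
def standardize(observations: RDD[Vector]): RDD[Vector] = {
  val scaler = new StandardScaler(withMean = true, withStd = true).fit(observations)
  scaler.transform(observations)
}

The standardized RDD can then be passed to run(...) in place of the raw observationsRdd.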

After scaling the data, I got some reasonable clusters out of ELKI (visualizing the first three of the 13 dimensions).

Judging by the distribution of the data points, though, I don't think Gaussian mixture modeling is appropriate for this data. The points appear to be grid-sampled from some hypersurfaces or trajectories, not drawn from Gaussian (!) distributions.

Here are the ELKI parameters I used:

-dbc.in /tmp/observations.csv
-dbc.filter normalization.columnwise.AttributeWiseVarianceNormalization
-algorithm clustering.em.EM -em.k 6
-em.centers RandomlyChosenInitialMeans -kmeans.seed 0

It is probably worth trying other clustering algorithms as well, for example HDBSCAN, which can identify density-based clusters:

Parameters:

-dbc.in /tmp/observations.csv
-dbc.filter normalization.columnwise.AttributeWiseVarianceNormalization
-algorithm clustering.hierarchical.extraction.HDBSCANHierarchyExtraction
-algorithm SLINKHDBSCANLinearMemory
-hdbscan.minPts 50 -hdbscan.minclsize 100

I would also try OPTICS, because I find that HDBSCAN tends to capture only the cores of clusters (by design). Judging from the OPTICS plot, I would not say the clusters are particularly well defined.

Besides trying other clustering algorithms, I think you will also need to put a lot of work into preprocessing and projecting the data, because it has very strong correlations. Try to encode as much prior knowledge about the data as you can into the preprocessing to improve the results.
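
For example, a decorrelating projection such as PCA could serve as one such preprocessing step. A minimal sketch, assuming MLlib's PCA from spark-1.5.2; the decorrelate helper and the choice of 3 components are purely illustrative and should be guided by your prior knowledge of the data:

import org.apache.spark.mllib.feature.PCA
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Project the observations onto their leading principal components to
// strip out linear correlations; scale first (see above), then project.
def decorrelate(observations: RDD[Vector], components: Int = 3): RDD[Vector] = {
  val pca = new PCA(components).fit(observations)
  pca.transform(observations)
}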