Mapping of elements gone bad

I am implementing k-means and I want to create the new centroids. But the mapping leaves one element out! When K has a smaller value, like 15, however, it works fine.

This is the code I have:

val K = 25 // number of clusters
val data = sc.textFile("dense.txt").map(
     t => (t.split("#")(0), parseVector(t.split("#")(1)))).cache()
val count = data.count()
println("Number of records " + count)

var centroids = data.takeSample(false, K, 42).map(x => x._2)
do {
  var closest = data.map(p => (closestPoint(p._2, centroids), p._2))
  var pointsGroup = closest.groupByKey()
  println(pointsGroup)
  pointsGroup.foreach { println }
  var newCentroids = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
  //var newCentroids = pointsGroup.mapValues(ps => average(ps)).collectAsMap() this will produce an error
  println(centroids.size)
  println(newCentroids.size)
  for (i <- 0 until K) {
    tempDist += centroids(i).squaredDist(newCentroids(i))
  }
  ..

And in the for loop I get an error that it cannot find an element (it is not always the same one; it depends on K):

java.util.NoSuchElementException: key not found: 2

Output before the error appears:

Number of records 27776
ShuffledRDD[5] at groupByKey at kmeans.scala:72
25
24            <- IT SHOULD BE 25

What is going wrong?


>>> println(newCentroids)
Map(23 -> (-0.0050852959701492536, 0.005512245104477607, -0.004460964477611937), 17 -> (-0.005459583045685268, 0.0029015278781725795, -8.451635532994901E-4), 8 -> (-4.691649213483123E-4, 0.0025375451685393366, 0.0063490755505617585), 11 -> (0.30361112034069937, -0.0017342255382385204, -0.005751167731061906), 20 -> (-5.839587918939964E-4, -0.0038189763756820145, -0.007067070459859708), 5 -> (-0.3787612396704685, -0.005814121628643806, -0.0014961713117870657), 14 -> (0.0024755681263616547, 0.0015191503267973836, 0.003411769193899781), 13 -> (-0.002657690932944597, 0.0077671050923225635, -0.0034652379980563263), 4 -> (-0.006963114731610361, 1.1751361829025871E-4, -0.7481135105367823), 22 -> (0.015318187079953534, -1.2929035958285013, -0.0044176372190034684), 7 -> (-0.002321059060773483, -0.006316359116022083, 0.006164669723756913), 16 -> (0.005341800955165691, -0.0017540737037037035, 0.004066574093567247), 1 -> (0.0024547379611650484, 0.0056298656504855955, 0.002504618082524296), 10 -> (3.421068671121009E-4, 0.0045169004751299275, 5.696239049740164E-4), 19 -> (-0.005453716071428539, -0.001450277556818192, 0.003860007248376626), 9 -> (-0.0032921685273631807, 1.8477108457711313E-4, -0.003070412228855717), 18 -> (-0.0026803160958904053, 0.00913904078767124, -0.0023528013698630146), 3 -> (0.005750011594202901, -0.003607098309178754, -0.003615918896940412), 21 -> (0.0024925166025641056, -0.0037607353461538507, -2.1588444871794858E-4), 12 -> (-7.920202960526356E-4, 0.5390774232894769, -4.928884539473694E-4), 15 -> (-0.0018608492323232324, -0.006973787272727284, -0.0027266663434343404), 24 -> (6.151173211963486E-4, 7.081812613784045E-4, 5.612962808842611E-4), 6 -> (0.005323933953732931, 0.0024014750473186123, -2.969338590956889E-4), 0 -> (-0.0015991676750160377, -0.003001317289659613, 0.5384176139563245))

A question with a related error: spark scala throws java.util.NoSuchElementException: key not found: 0 exception


EDIT:

After zero323 observed that two of the centroids were identical, I changed the code so that all centroids are unique. The behaviour stays the same, however. For this reason, I suspect that closestPoint() may return the same index for two different centroids. Here is the function:

  def closestPoint(p: Vector, centers: Array[Vector]): Int = {
    var bestIndex = 0
    var closest = Double.PositiveInfinity
    for (i <- 0 until centers.length) {
      val tempDist = p.squaredDist(centers(i))
      if (tempDist < closest) { // strict '<': on a tie the earlier index wins
        closest = tempDist
        bestIndex = i
      }
    }
    bestIndex
  }
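
A quick way to see how a cluster index can end up with no points (a minimal, self-contained sketch; the Pt class and its squaredDist are stand-ins for the real Vector type): because of the strict < comparison, a centroid that is never strictly closest to any point, e.g. a duplicate, is never returned, so no point maps to its index:

// Stand-in for the real Vector with just enough API for the demo.
case class Pt(x: Double, y: Double) {
  def squaredDist(o: Pt): Double = {
    val dx = x - o.x; val dy = y - o.y
    dx * dx + dy * dy
  }
}

def closestPt(p: Pt, centers: Array[Pt]): Int = {
  var bestIndex = 0
  var closest = Double.PositiveInfinity
  for (i <- 0 until centers.length) {
    val d = p.squaredDist(centers(i))
    if (d < closest) { closest = d; bestIndex = i }
  }
  bestIndex
}

val centers = Array(Pt(1, 1), Pt(1, 1)) // two identical centroids
println(closestPt(Pt(0, 0), centers))   // 0
println(closestPt(Pt(5, 5), centers))   // 0 -- index 1 is never chosen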

How can I solve this? I am running the code described above on a Spark cluster.

在 "E-step"(将点分配给聚类索引类似于 EM 算法的 E 步)中可能会发生您的其中一个索引不会被分配任何点的情况。如果发生这种情况,那么您需要有一种将该索引与某个点相关联的方法,否则您将在 "M-step" 之后得到更少的簇(将质心分配给索引类似于 M- EM 算法的步骤。)像这样的东西应该可以工作:

val newCentroids = {
  val temp = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
  val nMissing = K - temp.size
  // sample one replacement point per empty cluster; data holds (id, vector)
  // pairs, so keep only the vector part
  val sample = data.takeSample(false, nMissing, seed).map(_._2)
  var c = -1
  (for (i <- 0 until K) yield {
    val point = temp.getOrElse(i, { c += 1; sample(c) })
    (i, point)
  }).toMap
}

Just replace the line you currently use to compute newCentroids with this block.

There are other ways of addressing the issue, and the one above may not be the best (is it a good idea to call takeSample multiple times, once per iteration of the k-means algorithm? what if data contains a lot of repeated values? etc.), but it is a simple starting point.
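
For instance (a hedged alternative, not necessarily better): an empty cluster could simply keep its centroid from the previous iteration, which avoids the extra takeSample job entirely. This assumes centroids still holds the previous iteration's Array[Vector], as in your loop:

val newCentroids = {
  val temp = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
  // an index with no assigned points keeps its previous centroid
  (0 until K).map(i => (i, temp.getOrElse(i, centroids(i)))).toMap
}

A centroid kept this way may stay empty until convergence, but the map always has exactly K keys, so the distance loop no longer throws.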

As an aside, you may want to think about how you could replace groupByKey with reduceByKey.
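
For example (a sketch, assuming the Vector type supports element-wise + and scaling by a Double, as in the Spark k-means example): shuffle one (sum, count) pair per cluster index instead of every point, then divide once:

val sums = closest
  .mapValues(v => (v, 1)) // (clusterIndex, (vector, count))
  .reduceByKey { case ((v1, n1), (v2, n2)) => (v1 + v2, n1 + n2) }
val newCentroids = sums.mapValues { case (sum, n) => sum * (1.0 / n) }.collectAsMap()

This moves far less data than groupByKey, which has to materialize every point of a cluster on one executor before average can run.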

Note: For the curious, here is a reference describing the similarities between the EM algorithm and the k-means algorithm: http://papers.nips.cc/paper/989-convergence-properties-of-the-k-means-algorithms.pdf