Interpretation of Spark MLlib LDA results

I ran LDA on Spark for a set of documents and observed that the values of topicsMatrix, which represents the distribution of topics over terms, are greater than 1, e.g. 548.2201, 685.2436, 138.4013, ... What do these values mean? Are they log values of the distribution or something else? How do I convert these values into a probability distribution? Thanks in advance.

In both models (i.e. DistributedLDAModel and LocalLDAModel), I believe the topicsMatrix method returns (approximately, because of the Dirichlet prior on topics) the expected word-topic count matrix. To check this, you can take that matrix and sum each of its columns. The resulting vector (whose length is the number of topics) should be roughly equal to the total word count across all documents. In any case, to obtain the topics (probability distributions over the words of the dictionary), you need to normalize the columns of the matrix returned by topicsMatrix so that each column sums to 1.
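As a quick sanity check, a sketch along these lines could be used (ldaModel is an assumed name for an already trained model; topicsMatrix is vocabSize x k, so each column corresponds to one topic):

import org.apache.spark.mllib.linalg.Matrix

// ldaModel is assumed to be an already trained DistributedLDAModel or LocalLDAModel
val topics: Matrix = ldaModel.topicsMatrix   // vocabSize x k expected word-topic counts
val vocabSize = topics.numRows
val k = topics.numCols

// sum each column: entry j is roughly the number of tokens assigned to topic j,
// and the grand total should be close to the corpus token count
val columnSums = (0 until k).map(j => (0 until vocabSize).map(i => topics(i, j)).sum)
println(columnSums.mkString(", "))
println(columnSums.sum)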

I haven't fully tested it, but something like this should normalize the columns of the matrix returned by topicsMatrix:

import breeze.linalg.{sum, DenseMatrix => BDM}
import org.apache.spark.mllib.linalg.{DenseMatrix, Matrix}

def normalizeColumns(m: Matrix): DenseMatrix = {
  // Matrix.toArray returns the entries in column-major order, which is also
  // the layout Breeze's DenseMatrix constructor expects
  val bm = new BDM[Double](m.numRows, m.numCols, m.toArray)
  var j = 0
  while (j < bm.cols) {
    // total expected count for topic j; divide its column so it sums to 1
    val colSum = sum(bm(::, j))
    if (colSum != 0.0) bm(::, j) /= colSum
    j += 1
  }
  new DenseMatrix(bm.rows, bm.cols, bm.data)
}
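Hypothetical usage (again assuming a trained model named ldaModel): the result is a vocabSize x k matrix whose column j is the word distribution of topic j.

val wordTopicProbs = normalizeColumns(ldaModel.topicsMatrix)
// column j of wordTopicProbs now sums to 1 and gives P(word | topic j)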

Normalizing the columns of the matrix returned by topicsMatrix in pure Scala:

def formatSparkLDAWordOutput(wordTopMat: Matrix, wordMap: Map[Int, String]): scala.Predef.Map[String, Array[Double]] = {

  // incoming word-topic matrix is in column-major order and the columns are unnormalized
  val m = wordTopMat.numRows
  val n = wordTopMat.numCols
  val columnSums: Array[Double] = Range(0, n).map(j => Range(0, m).map(i => wordTopMat(i, j)).sum).toArray

  // transposing and reading the array in column-major order yields, for each word,
  // its n unnormalized topic values; divide each by the corresponding column (topic) sum
  val wordProbs: Seq[Array[Double]] = wordTopMat.transpose.toArray.grouped(n).toSeq
    .map(unnormProbs => unnormProbs.zipWithIndex.map({ case (u, j) => u / columnSums(j) }))

  // key each row of per-topic probabilities by its word
  wordProbs.zipWithIndex.map({ case (topicProbs, wordInd) => (wordMap(wordInd), topicProbs) }).toMap
}
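For example (hypothetical names: vocabArray is assumed to be the vocabulary used when building the term-count vectors, and ldaModel the trained model):

// build the index-to-word map from the vocabulary
val wordMap: Map[Int, String] = vocabArray.zipWithIndex.map { case (word, idx) => idx -> word }.toMap
val wordProbsByWord = formatSparkLDAWordOutput(ldaModel.topicsMatrix, wordMap)
// for any word in the vocabulary, wordProbsByWord(word)(j) is its probability under topic j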

https://github.com/apache/incubator-spot/blob/v1.0-incubating/spot-ml/src/main/scala/org/apache/spot/lda/SpotLDAWrapper.scala#L237