python scikit learn, get documents per topic in LDA

I'm running LDA on text data, following the example here: My question is:
How do I know which documents correspond to which topic? In other words, which documents discuss topic 1, for example?

Here are my steps:

n_features = 1000
n_topics = 8
n_top_words = 20

I read my text file line by line:

with open('dataset.txt', 'r') as data_file:
    mydata = [line.strip() for line in data_file]

Function to print the topics:

def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print("Topic #%d:" % topic_idx)
        print(" ".join([feature_names[i]
                        for i in topic.argsort()[:-n_top_words - 1:-1]]))

    print()

Vectorizing the data:

tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, token_pattern=r'\b\w{2,}\w+\b',
                                max_features=n_features,
                                stop_words='english')
tf = tf_vectorizer.fit_transform(mydata)
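
As a quick sanity check, the vectorizer can be tried on a few toy sentences (the sentences below are made up for illustration). Note that the token pattern must be a raw string, otherwise Python interprets `\b` as a backspace character instead of a word boundary:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Made-up documents, just to inspect what CountVectorizer produces.
toy_docs = [
    "solar power battery energy",
    "skin cosmetic hair extract",
    "cosmetic oil water emulsion",
]

# Raw string so \b is a regex word boundary, not a backspace.
vec = CountVectorizer(token_pattern=r'\b\w{2,}\b', stop_words='english')
X = vec.fit_transform(toy_docs)

print(X.shape)                 # (n_documents, n_unique_terms)
print(sorted(vec.vocabulary_))  # the learned feature names
```

`X` is a sparse document-term count matrix; each row is one document, each column one vocabulary term.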

Initializing the LDA:

lda = LatentDirichletAllocation(n_topics=3, max_iter=5,
                                learning_method='online',
                                learning_offset=50.,
                                random_state=0)

Running LDA on the tf data:

lda.fit(tf)

Printing the results using the function above:

print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()

print_top_words(lda, tf_feature_names, n_top_words)

The printed output is:

Topics in LDA model:
Topic #0:
solar road body lamp power battery energy beacon
Topic #1:
skin cosmetic hair extract dermatological aging production active
Topic #2:
cosmetic oil water agent block emulsion ingredients mixture

http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html#sklearn.decomposition.LatentDirichletAllocation.transform

The transform method takes a document-word matrix X as input and returns the document-topic distribution for X.

So if you call transform, passing in your documents, you can look for the documents that have a high enough (for your purposes) proportion of the topic you are interested in.

You need to transform your data:

doc_topic = lda.transform(tf)

and list each document with its highest-scoring topic like this:

for n in range(doc_topic.shape[0]):
    topic_most_pr = doc_topic[n].argmax()
    print("doc: {} topic: {}\n".format(n,topic_most_pr))
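
To get the actual list of documents per topic that the question asks for, the same argmax idea can be inverted with a `defaultdict`. A minimal sketch, using a made-up `doc_topic` matrix in place of `lda.transform(tf)`:

```python
from collections import defaultdict
import numpy as np

# Synthetic doc-topic distributions standing in for lda.transform(tf);
# each row sums to 1 and gives one document's topic proportions.
doc_topic = np.array([
    [0.8, 0.1, 0.1],   # doc 0 -> mostly topic 0
    [0.2, 0.7, 0.1],   # doc 1 -> mostly topic 1
    [0.1, 0.2, 0.7],   # doc 2 -> mostly topic 2
    [0.6, 0.3, 0.1],   # doc 3 -> mostly topic 0
])

# Group document indices under their dominant topic.
docs_per_topic = defaultdict(list)
for n, dist in enumerate(doc_topic):
    docs_per_topic[dist.argmax()].append(n)

for topic, docs in sorted(docs_per_topic.items()):
    print("topic {}: docs {}".format(topic, docs))
# topic 0: docs [0, 3]
# topic 1: docs [1]
# topic 2: docs [2]
```

If you want "discusses topic 1" to mean more than just "topic 1 has the highest score", filter on the probability itself instead, e.g. `[n for n, dist in enumerate(doc_topic) if dist[1] > 0.5]`.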