Capture bigram topics instead of unigrams using Latent Dirichlet Allocation
Here is what I have tried.
Raw LDA output (uni-grams):
topic1 - scuba, water, vapor, diving
topic2 - dioxide, plants, green, carbon
Required output (bi-gram topics):
topic1 - scuba diving, water vapor
topic2 - green plants, carbon dioxide
I also found this answer:
from nltk.util import ngrams

# Append each document's bigrams (joined with "_") to its existing unigram tokens
for doc in docs:
    docs[doc] = docs[doc] + ["_".join(w) for w in ngrams(docs[doc], 2)]
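For illustration, on a toy tokenized document that snippet keeps the unigrams and merely appends the joined bigrams (the token list below is a made-up example):

from nltk.util import ngrams

# Hypothetical toy document
tokens = ["scuba", "diving", "water", "vapor"]
tokens = tokens + ["_".join(w) for w in ngrams(tokens, 2)]
print(tokens)
# ['scuba', 'diving', 'water', 'vapor', 'scuba_diving', 'diving_water', 'water_vapor']

So the resulting topics still mix unigrams with bigrams, which is what I want to avoid.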
What should I change so that I get only bigrams?
To create documents that contain only bigrams:
from nltk.util import ngrams

# Replace each document's tokens with its bigrams, joined with "_"
for doc in docs:
    docs[doc] = ["_".join(w) for w in ngrams(docs[doc], 2)]
Or, using the bigram-specific helper:
from nltk.util import bigrams

# bigrams(tokens) is equivalent to ngrams(tokens, 2)
for doc in docs:
    docs[doc] = ["_".join(w) for w in bigrams(docs[doc])]
Then use these lists of bigrams as texts for the subsequent steps.
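A minimal end-to-end sketch of that last step, assuming gensim is the LDA implementation and that docs is a dict mapping document ids to token lists (both assumptions, not stated in the answer):

from gensim import corpora, models
from nltk.util import bigrams

# Hypothetical toy corpus: docs maps a document id to its list of tokens
docs = {
    "d1": ["scuba", "diving", "requires", "water"],
    "d2": ["green", "plants", "absorb", "carbon", "dioxide"],
}

# Rewrite each document as joined bigrams only, as shown above
for doc in docs:
    docs[doc] = ["_".join(w) for w in bigrams(docs[doc])]

# Build the gensim dictionary/corpus from the bigram documents and train LDA
texts = list(docs.values())
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)

# Topic terms are now bigrams such as "scuba_diving" or "carbon_dioxide"
for topic_id, terms in lda.show_topics(num_topics=2, num_words=2, formatted=False):
    print(topic_id, [term for term, weight in terms])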