How to get topic of new document in LDA model
How do I dynamically pass a user-supplied .txt document to an LDA model? I have tried the code below, but it fails to give the correct topic for the document. My .txt file is about sports, so the topic name should be Sports. Its output is:
Score: 0.5569453835487366 - Topic: 0.008*"bike" + 0.005*"game" + 0.005*"team" + 0.004*"run" + 0.004*"virginia"
Score: 0.370819091796875 - Topic: 0.016*"game" + 0.014*"team" + 0.011*"play" + 0.008*"hockey" + 0.008*"player"
Score: 0.061239391565322876 - Topic: 0.010*"card" + 0.010*"window" + 0.008*"driver" + 0.007*"sale" + 0.006*"price"
import re
import gensim
import spacy
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords

# Definitions implied by the snippet below
stop_words = stopwords.words('english')
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])

data = df.content.values.tolist()  # df: pandas DataFrame of training documents
data = [re.sub(r'\S*@\S*\s?', '', sent) for sent in data]  # remove emails
data = [re.sub(r'\s+', ' ', sent) for sent in data]        # collapse extra whitespace
data = [re.sub(r"\'", "", sent) for sent in data]          # remove single quotes

def sent_to_words(sentences):
    for sentence in sentences:
        yield gensim.utils.simple_preprocess(str(sentence), deacc=True)  # deacc=True removes punctuation

data_words = list(sent_to_words(data))

bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100)  # higher threshold -> fewer phrases
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)

def remove_stopwords(texts):
    return [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]

def make_bigrams(texts):
    return [bigram_mod[doc] for doc in texts]

def make_trigrams(texts):
    return [trigram_mod[bigram_mod[doc]] for doc in texts]

def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    texts_out = []
    for sent in texts:
        doc = nlp(" ".join(sent))
        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
    return texts_out
# Remove Stop Words
data_words_nostops = remove_stopwords(data_words)
# Form Bigrams
data_words_bigrams = make_bigrams(data_words_nostops)
data_lemmatized = lemmatization(data_words_bigrams, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
id2word = gensim.corpora.Dictionary(data_lemmatized)
texts = data_lemmatized
corpus = [id2word.doc2bow(text) for text in texts]
# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
                                            id2word=id2word,
                                            num_topics=20,
                                            random_state=100,
                                            update_every=1,
                                            chunksize=100,
                                            passes=10,
                                            alpha='auto',
                                            per_word_topics=True)
#f = io.open("text.txt", mode="r", encoding="utf-8")
p = open("text.txt", "r")  # document supplied by the user, related to sports
if p.mode == 'r':
    content = p.read()

bow_vector = id2word.doc2bow(lemmatization(p))
for index, score in sorted(lda_model[bow_vector], key=lambda tup: -1*tup[1]):
    print("Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 5)))
All of your code is correct, but I think your expectations of LDA modeling may be a little off. The output you received is correct!

First, you used the phrase "topic name". The topics that LDA generates don't have names, and there is no simple mapping between them and the class labels of the data the model was trained on. It is an unsupervised model; you would typically use LDA on data that has no labels at all. If your corpus contains documents belonging to classes A, B, C and D, and you train an LDA model to output four topics L, M, N and O, it does not follow that there exists some mapping like:
A -> M
B -> L
C -> O
D -> N
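One way to see this concretely is to tally which LDA topic dominates the documents of each known class. A minimal sketch, assuming a hypothetical labels list of gold class labels aligned with the corpus from the question; in practice each class usually spreads across several topics:

from collections import Counter

# `labels` is a hypothetical list of gold class labels, one per training document
dominant = [max(lda_model.get_document_topics(bow), key=lambda t: t[1])[0]
            for bow in corpus]
for cls in set(labels):
    topic_counts = Counter(t for t, l in zip(dominant, labels) if l == cls)
    print(cls, topic_counts.most_common(3))  # each class rarely maps to a single topic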
Second, note the distinction between tokens and topics in the output. The output of LDA looks something like:

Topic 1: 0.5 - 0.005*"token_13" + 0.003*"token_204" + ...
Topic 2: 0.07 - 0.01*"token_24" + 0.001*"token_3" + ...

In other words, every document is given a probability of belonging to each topic, and every topic is composed of a weighted sum over all the corpus tokens; those weights are what uniquely define the topic.
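In gensim you can inspect both sides of that output directly. A short sketch, reusing the lda_model and bow_vector from the question:

# Per-document side: probability of the new document under each topic
for topic_id, prob in lda_model.get_document_topics(bow_vector, minimum_probability=0.0):
    print("topic {}: {:.4f}".format(topic_id, prob))

# Per-topic side: the weighted tokens that define one topic
print(lda_model.show_topic(0, topn=5))  # [(word, weight), ...]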
It is tempting to look at the highest-weighted tokens in each topic and interpret the topics as classes. For example:

# If you have:
topic_1 = 0.1*"dog" + 0.08*"cat" + 0.04*"snake"
# It's tempting to name topic_1 = pets

But this is hard to validate and relies heavily on human intuition. The more common use of LDA is when you have no labels and want to identify which documents are semantically similar to one another, without necessarily determining what the correct class label for each document is.
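That similarity use case is straightforward with gensim's similarity index. A sketch, assuming the trained lda_model and training corpus from the question:

from gensim import similarities

# Represent every training document as a vector in topic space
topic_vecs = [lda_model.get_document_topics(bow, minimum_probability=0.0) for bow in corpus]
index = similarities.MatrixSimilarity(topic_vecs, num_features=lda_model.num_topics)

# Rank training documents by cosine similarity to the new document
sims = index[lda_model.get_document_topics(bow_vector, minimum_probability=0.0)]
print(sorted(enumerate(sims), key=lambda x: -x[1])[:5])  # top 5 most similar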
After a lot of experimentation, this is what worked for me; please comment if you do it differently.
bow_vector = dictionary.doc2bow(preprocess(content))
q = lda_model[bow_vector]

from operator import itemgetter
res = max(q, key=itemgetter(1))[0]   # index of the highest-scoring topic
res1 = max(q, key=itemgetter(1))[1]  # its probability

if res == 1:
    print("This .txt file is related to Politics/Government, Accuracy:", res1)
elif res == 2:
    print("This .txt file is related to Sports, Accuracy:", res1)
elif res == 3:
    print("This .txt file is related to Computer, Accuracy:", res1)
# elif ... (and so on for the remaining topics, with a final else branch)
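Two caveats about this approach: the hard-coded index-to-label mapping only holds for one particular trained model (retraining can permute the topics), and preprocess must apply the same cleaning/lemmatization pipeline used at training time, or doc2bow won't line up with the dictionary's token ids. A more robust sketch is to print the winning topic's own top words and judge the label from those:

# Inspect the winning topic instead of hard-coding a label for its index
top_words = [word for word, weight in lda_model.show_topic(res, topn=5)]
print("Best topic #{} (score {:.2%}): {}".format(res, res1, ", ".join(top_words)))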