Identify words that appear in less than 1% of the corpus documents
I have a corpus of customer reviews and want to identify rare words, which for my purposes means words that appear in less than 1% of the corpus documents.
I already have a working solution, but it is far too slow for my script:
# Review data is a nested list of reviews, each represented as a bag of words
doc_clean = [['This', 'is', 'review', '1'], ['This', 'is', 'review', '2'], ..]

# Save all distinct words of the corpus in a set
all_words = set(w for doc in doc_clean for w in doc)

# Initialize a list for the collection of rare words
rare_words = []

# Loop through all_words to identify rare words
for word in all_words:
    # Count in how many reviews the word appears
    counts = sum(word in set(review) for review in doc_clean)
    # Add word to rare_words if it appears in less than 1% of the reviews
    if counts / len(doc_clean) < 0.01:
        rare_words.append(word)
Does anyone know a faster implementation? Iterating over every single word across every single review seems extremely time-consuming.
Thanks in advance and best wishes,
Markus
This may not be the most efficient solution, but it is easy to understand and maintain, and I use it often myself. It relies on Counter and pandas:
import pandas as pd
from collections import Counter
Apply a Counter to each document and build a document-term matrix:
df = pd.DataFrame(list(map(Counter, doc_clean)))
Some fields in the matrix are undefined (NaN); they correspond to words that do not appear in a particular document. Count, for each word, the number of documents it occurs in:
counts = df.notnull().sum()
Now select the words that do not appear often enough (using your 1% threshold):
rare_words = counts[counts < 0.01 * len(doc_clean)].index.tolist()
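Putting the steps together on a toy corpus (a minimal sketch; `doc_clean` below is stand-in data, and the threshold is raised to 50% only so that the small example yields results):

```python
import pandas as pd
from collections import Counter

# Toy corpus: four reviews, each a bag of words
doc_clean = [
    ['good', 'fast', 'delivery'],
    ['good', 'slow', 'delivery'],
    ['good', 'cheap'],
    ['bad', 'cheap'],
]

# One Counter per review -> document-term matrix (NaN where a word is absent)
df = pd.DataFrame(list(map(Counter, doc_clean)))

# Document frequency: in how many reviews each word appears
counts = df.notnull().sum()

# Words appearing in fewer than 50% of the reviews
rare_words = counts[counts < 0.5 * len(doc_clean)].index.tolist()

print(sorted(rare_words))  # ['bad', 'fast', 'slow']
```

Each pass over the data is vectorized inside pandas, so this avoids the word-by-review double loop of the original solution.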