Scikit Learn TfidfVectorizer : How to get top n terms with highest tf-idf score
I am working on a keyword extraction problem. Consider the very general case:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(tokenizer=tokenize, stop_words='english')
t = """Two Travellers, walking in the noonday sun, sought the shade of a widespreading tree to rest. As they lay looking up among the pleasant leaves, they saw that it was a Plane Tree.
"How useless is the Plane!" said one of them. "It bears no fruit whatever, and only serves to litter the ground with leaves."
"Ungrateful creatures!" said a voice from the Plane Tree. "You lie here in my cooling shade, and yet you say I am useless! Thus ungratefully, O Jupiter, do men receive their blessings!"
Our best blessings are often the least appreciated."""
tfs = tfidf.fit_transform(t.split(" "))
str = 'tree cat travellers fruit jupiter'
response = tfidf.transform([str])
feature_names = tfidf.get_feature_names()
for col in response.nonzero()[1]:
    print(feature_names[col], ' - ', response[0, col])
This gives me
(0, 28) 0.443509712811
(0, 27) 0.517461475101
(0, 8) 0.517461475101
(0, 6) 0.517461475101
tree - 0.443509712811
travellers - 0.517461475101
jupiter - 0.517461475101
fruit - 0.517461475101
which is good. For any new incoming document, is there a way to get the top n terms with the highest tf-idf score?
You have to do a bit of a song and dance to get the matrices as numpy arrays, but this should do what you're looking for:
feature_array = np.array(tfidf.get_feature_names())
tfidf_sorting = np.argsort(response.toarray()).flatten()[::-1]
n = 3
top_n = feature_array[tfidf_sorting][:n]
This gives me:
array([u'fruit', u'travellers', u'jupiter'],
dtype='<U13')
The argsort call is really useful; here are the docs for it. We have to do [::-1] because argsort only supports sorting from small to large. We call flatten to reduce the dimensions to 1d so that the sorted indices can be used to index the 1d feature array. Note that including the call to flatten only works if you are testing one document at a time.
Also, on a side note, did you mean something like tfs = tfidf.fit_transform(t.split("\n\n"))? Otherwise, each term in the multiline string is being treated as a "document". Using \n\n instead means that we are actually looking at 4 documents (one for each line), which makes more sense when you think about tfidf.
A solution that uses the sparse matrix itself (without .toarray())!
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words='english')
corpus = [
    'I would like to check this document',
    'How about one more document',
    'Aim is to capture the key words from the corpus',
    'frequency of words in a document is called term frequency'
]
X = tfidf.fit_transform(corpus)
feature_names = np.array(tfidf.get_feature_names())
new_doc = ['can key words in this new document be identified?',
           'idf is the inverse document frequency calculated for each of the words']
responses = tfidf.transform(new_doc)
def get_top_tf_idf_words(response, top_n=2):
    sorted_nzs = np.argsort(response.data)[:-(top_n + 1):-1]
    return feature_names[response.indices[sorted_nzs]]

print([get_top_tf_idf_words(response, 2) for response in responses])
# [array(['key', 'words'], dtype='<U9'),
#  array(['frequency', 'words'], dtype='<U9')]
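The [:-(top_n+1):-1] slice may look cryptic: it reads the indices of the top_n largest values, largest first, from an ascending argsort. On a plain array, for illustration:

```python
import numpy as np

data = np.array([0.2, 0.9, 0.1, 0.5])
top_n = 2
# argsort is ascending, so walk the last top_n positions backwards
idx = np.argsort(data)[:-(top_n + 1):-1]
print(idx)  # [1 3] -> 0.9 first, then 0.5
```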
Here is a quick piece of code for that (documents is a list):
def get_tfidf_top_features(documents, n_top=10):
    # pass max_features=... here if you want to cap the vocabulary size
    tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')
    tfidf = tfidf_vectorizer.fit_transform(documents)
    # rank terms by their total tf-idf weight across all documents
    importance = np.argsort(np.asarray(tfidf.sum(axis=0)).ravel())[::-1]
    tfidf_feature_names = np.array(tfidf_vectorizer.get_feature_names())
    return tfidf_feature_names[importance[:n_top]]