Get the top term per document - scikit tf-idf
After vectorizing a set of documents with scikit's tf-idf vectorizer, is there a way to get the most 'influential' terms per document?
I have only found ways to get the most 'influential' terms for the whole corpus, though, not for each individual document.
Say you start with a dataset:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
import numpy as np
from sklearn.datasets import fetch_20newsgroups
d = fetch_20newsgroups()
Use a count vectorizer followed by a tf-idf transform:
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(d.data)
transformer = TfidfTransformer()
X_train_tfidf = transformer.fit_transform(X_train_counts)
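As an aside, scikit-learn's TfidfVectorizer combines these two steps into a single estimator; with default settings it produces the same matrix. A minimal sketch:
from sklearn.feature_extraction.text import TfidfVectorizer

# Equivalent to the CountVectorizer + TfidfTransformer pair above,
# done in one estimator (the result matches X_train_tfidf)
tfidf_vect = TfidfVectorizer()
X_train_tfidf_direct = tfidf_vect.fit_transform(d.data)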
Now you can create an inverse mapping from column indices back to terms:
m = {v: k for (k, v) in count_vect.vocabulary_.items()}
This gives the most influential term in each document:
[m[t] for t in np.array(np.argmax(X_train_tfidf, axis=1)).flatten()]
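If you want several terms per document rather than just one, here is a minimal sketch (assuming the matrix is in CSR format, which fit_transform returns, and an arbitrary choice of k=3):
k = 3  # arbitrary: how many terms to keep per document
top_k_terms = []
for row in X_train_tfidf:           # iterating a CSR matrix yields 1 x n_features rows
    scores = row.toarray().ravel()  # densify one row at a time to bound memory use
    top_idx = np.argsort(scores)[-k:][::-1]  # indices of the k highest tf-idf scores
    top_k_terms.append([m[i] for i in top_idx])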
Just adding another way to do the last two steps in Ami's answer:
# Get an array of all vocabulary terms, in column order
# (get_feature_names_out() replaced get_feature_names(), which was removed in scikit-learn 1.2)
feature_names = np.array(count_vect.get_feature_names_out())
# argmax on a sparse matrix returns an (n_docs, 1) matrix, so flatten it before indexing
feature_names[np.asarray(X_train_tfidf.argmax(axis=1)).ravel()]
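For example, to peek at the result for the first few documents:
top_terms = feature_names[np.asarray(X_train_tfidf.argmax(axis=1)).ravel()]
print(top_terms[:5])  # top tf-idf term of the first five documents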