Group features of TF-IDF vector in scikit-learn
I'm using scikit-learn to train a text classification model on TF-IDF feature vectors, with the following code:
from sklearn import naive_bayes
from sklearn.feature_extraction.text import TfidfVectorizer

model = naive_bayes.MultinomialNB()
feature_vector_train = TfidfVectorizer().fit_transform(X)
model.fit(feature_vector_train, Y)
I need to rank the extracted features in descending order of their TF-IDF weights, split them into two non-overlapping groups of features, and finally train two separate classification models. How do I split the main feature vector into odd-ranked and even-ranked groups?
The result of your TfidfVectorizer is an n x m matrix, where n is the number of documents and m is the number of unique words. Each column of feature_vector_train therefore corresponds to one specific word in the dataset. Adapting the solution from this tutorial should let you extract the highest- and lowest-weighted words:
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
feature_vector_train = vectorizer.fit_transform(X)
feature_names = vectorizer.get_feature_names()  # on scikit-learn >= 1.0, use get_feature_names_out()
# Sum each feature's weight across all documents; ravel() flattens the 1 x m result into a plain array
total_tfidf_weights = np.asarray(feature_vector_train.sum(axis=0)).ravel()
# Alternatively, vectorizer.transform(feature_names) gives the value of each feature in isolation
# Sort the feature names and the tf-idf weights together by zipping them
sorted_names_weights = sorted(zip(feature_names, total_tfidf_weights), key=lambda x: x[1], reverse=True)  # key sorts by the weight column; reverse sorts largest to smallest
# Unzip the names and weights
sorted_feature_names, sorted_total_tfidf_weights = zip(*sorted_names_weights)
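Since the question asks specifically for odd- and even-ranked groups, one way to form them is to take alternating entries of the sorted list. This is a minimal sketch under that interpretation; group1 and group2 are just names chosen to match the code below:

# Even-ranked features (ranks 0, 2, 4, ...) and odd-ranked features (ranks 1, 3, 5, ...)
group1 = sorted_feature_names[0::2]
group2 = sorted_feature_names[1::2]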
From this point you should be able to separate the features however you need. Once you have split them into two groups, group1 and group2, you can carve the matrix into two submatrices like so:
# Create a feature_name -> column index mapping
column_mapping = {name: i for i, name in enumerate(feature_names)}
# Extract the submatrices
group1_column_indexes = [column_mapping[feat] for feat in group1]
group1_feature_vector_train = feature_vector_train[:, group1_column_indexes]  # all rows, but only group1 columns
group2_column_indexes = [column_mapping[feat] for feat in group2]
group2_feature_vector_train = feature_vector_train[:, group2_column_indexes]
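To close the loop on the original goal of training two separate classifiers, a minimal sketch, assuming Y holds the training labels as in the question's code:

from sklearn import naive_bayes

# Train one model per feature group on the same labels
model1 = naive_bayes.MultinomialNB().fit(group1_feature_vector_train, Y)
model2 = naive_bayes.MultinomialNB().fit(group2_feature_vector_train, Y)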