Problems loading textual data with scikit-learn?
I am trying to classify my own data into two categories, so I have the following:
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
# Load the text data
categories = [
    'CLASS_1',
    'CLASS_2',
]
text_train_subset = load_files('train',
                               categories=categories)
text_test_subset = load_files('test',
                              categories=categories)
# Turn the text documents into vectors of word frequencies
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(text_train_subset)
y_train = text_train_subset.target
classifier = MultinomialNB().fit(X_train, y_train)
print("Training score: {0:.1f}%".format(
classifier.score(X_train, y_train) * 100))
# Evaluate the classifier on the testing set
X_test = vectorizer.transform(text_test_subset.data)
y_test = text_test_subset.target
print("Testing score: {0:.1f}%".format(
classifier.score(X_test, y_test) * 100))
For the code above, and following the documentation, I have this directory structure:
data_folder/
    train_folder/
        CLASS_1.txt CLASS_2.txt
    test_folder/
        test.txt
Then I get this error:
% (size, n_samples))
ValueError: Found array with dim 0. Expected 5
I also tried fit_transform, but I still get the same error. How can I fix this dimension problem?
The first problem is that your directory structure is wrong. You need it to be like this:
container_folder/
    CLASS_1_folder/
        file_1.txt, file_2.txt, ...
    CLASS_2_folder/
        file_1.txt, file_2.txt, ...
You need both your training set and your test set laid out in this directory structure. Alternatively, you can put all of your data in a single directory and split it in two with train_test_split, as sketched below.
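For example, a minimal sketch of the single-directory approach (the folder name and split ratio here are just placeholders):
from sklearn.datasets import load_files
from sklearn.model_selection import train_test_split

# Subfolder names become the class labels
all_data = load_files('container_folder')
# Split the raw documents and their labels into a training and a testing portion
docs_train, docs_test, y_train, y_test = train_test_split(
    all_data.data, all_data.target, test_size=0.25, random_state=42)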
Second,
X_train = vectorizer.fit_transform(text_train_subset)
needs to be
X_train = vectorizer.fit_transform(text_train_subset.data) # added .data
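The reason is that load_files returns a Bunch object, not a list of documents: the raw texts live in its .data attribute, the integer labels in .target, and the folder names in .target_names. A quick way to see this (a small sketch, assuming the 'train' folder from the question):
bunch = load_files('train', categories=categories)
print(type(bunch.data))        # list of raw document contents
print(bunch.target_names)      # ['CLASS_1', 'CLASS_2']
print(bunch.target[:5])        # integer class indices, one per document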
Here is a complete working example:
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
text_train_subset = load_files('sample-data/web')
text_test_subset = text_train_subset # load your actual test data here
# Turn the text documents into vectors of word frequencies
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(text_train_subset.data)
y_train = text_train_subset.target
classifier = MultinomialNB().fit(X_train, y_train)
print("Training score: {0:.1f}%".format(
classifier.score(X_train, y_train) * 100))
# Evaluate the classifier on the testing set
X_test = vectorizer.transform(text_test_subset.data)
y_test = text_test_subset.target
print("Testing score: {0:.1f}%".format(
classifier.score(X_test, y_test) * 100))
The directory structure of sample-data/web is:
sample-data/web
├── de
│ ├── apollo8.txt
│ ├── fiv.txt
│ ├── habichtsadler.txt
└── en
├── elizabeth_needham.txt
├── equipartition_theorem.txt
├── sunderland_echo.txt
└── thespis.txt
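Once the classifier is trained, you can map its predictions back to the folder names via target_names (a small sketch; the example document is made up):
new_docs = ["Die Sonne scheint heute"]   # made-up example document
X_new = vectorizer.transform(new_docs)
predicted = classifier.predict(X_new)
print([text_train_subset.target_names[i] for i in predicted])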