Found array with dim 3. Estimator expected <= 2

I'm using LDA on a simple collection of documents. My goal is to extract the topics and then use them as features to evaluate my model.

I decided to use multinomial naive Bayes (MultinomialNB) as the estimator, though I'm not sure whether it's a good choice.

import itertools
from gensim.models import ldamodel
from nltk.tokenize import RegexpTokenizer
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

tokenizer = RegexpTokenizer(r'\w+')

# minimal English stop-word list (just 'a' for this toy example)
en_stop = {'a'}

# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()

# create sample documents
doc_a = "Brocolli is good to eat. My brother likes to eat good brocolli, but not my mother."
doc_b = "My mother spends a lot of time driving my brother around to baseball practice."
doc_c = "Some health experts suggest that driving may cause increased tension and blood pressure."
doc_d = "I often feel pressure to perform well at school, but my mother never seems to drive my brother to do better."
doc_e = "Health professionals say that brocolli is good for your health."

# compile sample documents into a list
doc_set = [doc_a, doc_b, doc_c, doc_d, doc_e]

# list for tokenized documents in loop
texts = []

# loop through document list
for doc in doc_set:
    # clean and tokenize document string
    raw = doc.lower()
    tokens = tokenizer.tokenize(raw)

    # remove stop words from tokens
    stopped_tokens = [token for token in tokens if token not in en_stop]

    # stem tokens
    stemmed_tokens = [p_stemmer.stem(token) for token in stopped_tokens]

    # add tokens to list
    texts.append(stemmed_tokens)

# turn our tokenized documents into a id <-> term dictionary
dictionary = corpora.Dictionary(texts)

# convert tokenized documents into a document-term matrix
corpus = [dictionary.doc2bow(text) for text in texts]


# train the LDA model (4 topics)
lda = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=4,
                        update_every=1, chunksize=10000, passes=1)


# assign topic distributions to the documents in the corpus
a = []
lda_corpus = lda[corpus]
for i in range(len(doc_set)):
    a.append(lda_corpus[i])
    print(lda_corpus[i])
merged_list = list(itertools.chain(*lda_corpus))
print(a)


sv = MultinomialNB()

yvalues = [0, 1, 2, 3]

sv.fit(a, yvalues)        # <-- this call raises the error below
predictclass = sv.predict(a)

test_labels = [0, 1, 2, 3]
yacc = metrics.accuracy_score(test_labels, predictclass)
print(yacc)

When I run this code, it throws the error mentioned in the title.

This is the output of the LDA model (the topic-document distribution) that I feed to the classifier; each inner list holds (topic_id, probability) tuples for one document:

[[(0, 0.95533888404477663), (1, 0.014775921798986477), (2, 0.015161897773308793), (3, 0.014723296382928375)], [(0, 0.019079556242721694), (1, 0.017932434792585779), (2, 0.94498655991579728), (3, 0.018001449048895311)], [(0, 0.017957955483631164), (1, 0.017900184473362918), (2, 0.018133572636989413), (3, 0.9460082874060165)], [(0, 0.96554611572184923), (1, 0.011407838337200715), (2, 0.011537900721487016), (3, 0.011508145219463113)], [(0, 0.023306931039431281), (1, 0.022823706054846005), (2, 0.93072240824085961), (3, 0.023146954664863096)]]

My labels here are 0, 1, 2, 3.
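
One caveat worth noting (an assumption on my side, not something stated in the original post): when you index a gensim LdaModel with a corpus, topics whose probability falls below the model's minimum_probability threshold are silently dropped, so some rows can come out shorter than num_topics. Constructing the model with minimum_probability=0 (a standard LdaModel parameter) keeps all four topics in every row:

# sketch: keep every topic in each document's distribution so all
# rows have the same length (minimum_probability defaults to 0.01)
lda = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=4,
                        update_every=1, chunksize=10000, passes=1,
                        minimum_probability=0.0)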

I found an answer that suggests reshaping the array,

but when I write:

nsamples, nx, ny = a.shape
d2_train_dataset = a.reshape((nsamples,nx*ny))

it doesn't work in my case. In fact, a has no shape attribute, because it is a plain Python list rather than a NumPy array.
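
For illustration, here is a minimal sketch (assuming every document's list contains all four topics) of why the reshape cannot be applied directly; converting a to a NumPy array first makes the third dimension visible:

import numpy as np

# a is a list of lists of (topic_id, probability) tuples, so
# np.array() produces a 3-D array of shape (n_docs, n_topics, 2)
arr = np.array(a)
print(arr.shape)  # e.g. (5, 4, 2) -- dim 3, which the estimator rejects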

The full traceback:

Traceback (most recent call last):
  File "/home/saria/PycharmProjects/TfidfLDA/test3.py", line 87, in <module>
    sv.fit(a,yvalues)
  File "/home/saria/tfwithpython3.6/lib/python3.5/site-packages/sklearn/naive_bayes.py", line 562, in fit
    X, y = check_X_y(X, y, 'csr')
  File "/home/saria/tfwithpython3.6/lib/python3.5/site-packages/sklearn/utils/validation.py", line 521, in check_X_y
    ensure_min_features, warn_on_dtype, estimator)
  File "/home/saria/tfwithpython3.6/lib/python3.5/site-packages/sklearn/utils/validation.py", line 405, in check_array
    % (array.ndim, estimator_name))
ValueError: Found array with dim 3. Estimator expected <= 2.

The error is raised when fit is called on MultinomialNB because the data in a has more than two dimensions. As currently built, a provides a list of tuples for each document, which the model does not accept.

Since the first element of each tuple is just the topic label, you can drop it from the tuple and rebuild the data as a two-dimensional matrix. The code below does exactly that:

new_a = []
new_y = []
for doc in a:
    temp_a = []
    # the most probable topic becomes the document's label
    sorted_labels = sorted(doc, key=lambda t: t[1], reverse=True)
    new_y.append(sorted_labels[0][0])
    # keep only the probabilities, dropping the topic ids
    for z in doc:
        temp_a.append(z[1])
    new_a.append(temp_a)

new_a will be a list of documents in which each entry holds the scores for topics 0, 1, 2 and 3. You can then call sv.fit(new_a, yvalues) to fit your model.
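
As an alternative sketch (using gensim's own utility rather than the loop above), gensim.matutils.corpus2dense performs the same tuple-to-matrix conversion in one call:

from gensim import matutils

# corpus2dense returns a (num_terms x num_docs) matrix,
# so transpose to get one row per document, one column per topic
new_a = matutils.corpus2dense(lda_corpus, num_terms=lda.num_topics).T
sv.fit(new_a, yvalues)
predictclass = sv.predict(new_a)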