Gensim: TypeError: doc2bow expects an array of unicode tokens on input, not a single string
I'm just getting started with some Python tasks and I'm having a problem with gensim. I'm trying to load files from disk and process them (split them and lower() them).
My code is below:
dictionary_arr = []
for file_path in glob.glob(os.path.join(path, '*.txt')):
    with open(file_path, "r") as myfile:
        text = myfile.read()
        for words in text.lower().split():
            dictionary_arr.append(words)
dictionary = corpora.Dictionary(dictionary_arr)
The list (dictionary_arr) contains all the words from all the files, and I then use gensim's corpora.Dictionary to process that list. But I run into an error:
TypeError: doc2bow expects an array of unicode tokens on input, not a single string
I don't understand what the problem is; any help would be appreciated.
In dictionary.py, the initialization function is:
def __init__(self, documents=None):
    self.token2id = {}  # token -> tokenId
    self.id2token = {}  # reverse mapping for token2id; only formed on request, to save memory
    self.dfs = {}       # document frequencies: tokenId -> in how many documents this token appeared
    self.num_docs = 0   # number of documents processed
    self.num_pos = 0    # total number of corpus positions
    self.num_nnz = 0    # total number of non-zeroes in the BOW matrix
    if documents is not None:
        self.add_documents(documents)
The add_documents function builds the dictionary from a collection of documents, where each document is a list of tokens:
def add_documents(self, documents):
    for docno, document in enumerate(documents):
        if docno % 10000 == 0:
            logger.info("adding document #%i to %s" % (docno, self))
        _ = self.doc2bow(document, allow_update=True)  # ignore the result, here we only care about updating token ids
    logger.info("built %s from %i documents (total %i corpus positions)" %
                (self, self.num_docs, self.num_pos))
So if you initialize Dictionary this way, you must pass a collection of documents rather than a single document. For example,
dic = corpora.Dictionary([a.split()])
works.
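Applied to the code in the question, that means collecting one token list per file and passing the list of those lists to the constructor. A minimal sketch (assuming `path` points at the folder of .txt files, as in the question):

import glob
import os

from gensim import corpora

documents = []
for file_path in glob.glob(os.path.join(path, '*.txt')):
    with open(file_path, "r") as myfile:
        # one document = one list of lowercased tokens
        documents.append(myfile.read().lower().split())

# Dictionary expects a collection of token lists, one list per document
dictionary = corpora.Dictionary(documents)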
Dictionary expects tokenized strings (lists of tokens) as input:
dataset = ['driving car ',
           'drive car carefully',
           'student and university']

# be sure to split each sentence before feeding it into Dictionary
dataset = [d.split() for d in dataset]
vocab = Dictionary(dataset)
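A quick way to confirm the vocabulary was built as expected is to inspect the token-to-id mapping (the exact ids depend on the order the tokens are seen):

print(vocab.token2id)  # e.g. {'car': 0, 'driving': 1, ...}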
Hello everyone, I ran into the same problem. This is what worked for me:
# Tokenize the sentence into words
tokens = [word for word in sentence.split()]

# Create dictionary
dictionary = corpora.Dictionary([tokens])
print(dictionary)
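From there, the same dictionary can convert any tokenized document into a bag-of-words vector, which is usually the next step. A minimal sketch, assuming `sentence` and `tokens` are the variables from the snippet above:

# bag-of-words representation: list of (token_id, count) pairs
bow = dictionary.doc2bow(tokens)
print(bow)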