Python: gensim: RuntimeError: you must first build vocabulary before training the model

I know this question has been asked before, but I still can't find a solution.

I want to use gensim's word2vec on a custom dataset, but I'm still figuring out what format the dataset has to be in. I looked at this post, where the input is basically a list of lists (one big list containing smaller lists, which are tokenized sentences from the NLTK Brown corpus). So I assumed this is the input format I have to use for word2vec.Word2Vec(). However, it doesn't work on my small test set, and I don't understand why.

What I have tried:

This works:

from gensim.models import word2vec
from nltk.corpus import brown
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

brown_vecs = word2vec.Word2Vec(brown.sents())

This didn't work:

sentences = [ "the quick brown fox jumps over the lazy dogs","yoyoyo you go home now to sleep"]
vocab = [s.encode('utf-8').split() for s in sentences]
voc_vec = word2vec.Word2Vec(vocab)

I don't understand why it doesn't work on the "mock" data, even though it has the same data structure as the sentences from the Brown corpus:

vocab:

[['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dogs'], ['yoyoyo', 'you', 'go', 'home', 'now', 'to', 'sleep']]

brown.sents(): (the beginning)

[['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of', "Atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.'], ['The', 'jury', 'further', 'said', 'in', 'term-end', 'presentments', 'that', 'the', 'City', 'Executive', 'Committee', ',', 'which', 'had', 'over-all', 'charge', 'of', 'the', 'election', ',', '``', 'deserves', 'the', 'praise', 'and', 'thanks', 'of', 'the', 'City', 'of', 'Atlanta', "''", 'for', 'the', 'manner', 'in', 'which', 'the', 'election', 'was', 'conducted', '.'], ...]

Can anyone tell me what I'm doing wrong?

The default min_count in gensim's Word2Vec is 5. If your vocabulary contains no word with a frequency greater than 4, the vocabulary ends up empty, hence the RuntimeError. Try

voc_vec = word2vec.Word2Vec(vocab, min_count=1)
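To see why the toy corpus triggers the error with the defaults, you can count the token frequencies yourself. The following is a plain-standard-library sketch of the pruning idea (not gensim's actual internal code): with min_count=5, every word in the two mock sentences is discarded.

```python
from collections import Counter

# The two tokenized mock sentences from the question
sentences = [
    ['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dogs'],
    ['yoyoyo', 'you', 'go', 'home', 'now', 'to', 'sleep'],
]

# Frequency of every token across the corpus
counts = Counter(word for sent in sentences for word in sent)
print(counts['the'])  # 'the' is the most frequent word, and it only appears twice

# Mimic the vocabulary pruning: keep only words seen at least min_count times
min_count = 5
vocab = {w for w, c in counts.items() if c >= min_count}
print(vocab)          # empty set -- no vocabulary, hence the error

vocab = {w for w, c in counts.items() if c >= 1}
print(len(vocab))     # with min_count=1, every distinct word survives
```

The Brown corpus works out of the box simply because it is large enough that plenty of words occur five times or more.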

The input to gensim's Word2Vec must be an iterable of tokenized sentences, i.e. a list of lists of word tokens.

For example, of these three shapes:

1. sentences = ['I love ice-cream', 'he loves ice-cream', 'you love ice cream']
2. words = ['i','love','ice - cream', 'like', 'ice-cream']
3. sentences = [['i', 'love', 'ice-cream'], ['he', 'loves', 'ice-cream'], ['you', 'love', 'ice', 'cream']]

only format 3 is what the model expects. Format 1 would make gensim iterate each string character by character, and format 2 treats each single word as a whole sentence.
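To convert raw strings like those in format 1 into the token lists Word2Vec expects, split each sentence on whitespace (a minimal sketch; real preprocessing would also handle punctuation):

```python
# Hypothetical raw sentences, as in format 1 above
sentences = ['I love ice-cream', 'he loves ice-cream', 'you love ice cream']

# str.split() with no arguments splits on any run of whitespace
tokenized = [s.lower().split() for s in sentences]
print(tokenized)
# [['i', 'love', 'ice-cream'], ['he', 'loves', 'ice-cream'], ['you', 'love', 'ice', 'cream']]
```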

Build the vocabulary before training:

model.build_vocab(sentences, update=False)

Just check out the link for detailed info.