Memory error when training a TBL POS tagger in Python

Training on a corpus of 40K sentences works fine, but when I train on 86K sentences I get this error:

ERROR:root:
Traceback (most recent call last):
  File "CLC_POS_train.py", line 95, in main
    train(sys.argv[10], encoding, flag_tagger, k, percent, eval_flag)
  File "CLC_POS_train.py", line 49, in train
    CLC_POS.process('TBL', train_data, test_data, flag_evaluate[1], flag_dump[1], 'pos_tbl.model' + postfix)
  File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 184, in process
    tagger = CLC_POS.train_tbl(train_data)
  File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 71, in train_tbl
    tbl_tagger = brill_trainer.BrillTaggerTrainer.train(trainer, train_data, max_rules=1000, min_score=3)
  File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 274, in train
    self._init_mappings(test_sents, train_sents)
  File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 341, in _init_mappings
    self._tag_positions[tag].append((sentnum, wordnum))
MemoryError
INFO:root:

I am already using 64-bit Python 3.5 on Windows, but the error still occurs. Here is the training code:

import nltk
from nltk.tag import RegexpTagger, brill, brill_trainer
# MyRegexp (the regex pattern list) and train_data are defined elsewhere.

t0 = RegexpTagger(MyRegexp.create_regexp_tagger())
t1 = nltk.UnigramTagger(train_data, backoff=t0)
t2 = nltk.BigramTagger(train_data, backoff=t1)
trainer = brill_trainer.BrillTaggerTrainer(t2, brill.fntbl37())
tbl_tagger = trainer.train(train_data, max_rules=1000, min_score=3)

This happens because your PC is running out of memory: training TBL rules on a large corpus requires a great deal of RAM, since the trainer indexes every tag position and tracks many candidate rules at once. Install more memory and the training will be able to complete.
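If adding RAM is not an option, you can also shrink the trainer's memory footprint. A minimal sketch (using a tiny hand-made corpus as a stand-in for your `train_data`): switch from the 37-template set `fntbl37()` to the smaller `brill24()`, cap `max_rules`, and raise `min_score` so fewer candidate rules are kept in memory at once.

```python
import nltk
from nltk.tag import RegexpTagger, brill, brill_trainer

# Tiny toy corpus just to make the sketch self-contained;
# substitute your real train_data here.
train_data = [
    [("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
    [("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")],
] * 50

# Simple regex backoff instead of MyRegexp, which is not shown.
t0 = RegexpTagger([(r".*s$", "VBZ"), (r".*", "NN")])
t1 = nltk.UnigramTagger(train_data, backoff=t0)

# brill24() defines far fewer templates than fntbl37(), so the
# trainer generates far fewer candidate rules; a lower max_rules
# and a higher min_score also bound its working set.
trainer = brill_trainer.BrillTaggerTrainer(t1, brill.brill24())
tbl_tagger = trainer.train(train_data, max_rules=100, min_score=5)

print(tbl_tagger.tag(["the", "dog", "sleeps"]))
```

With fewer templates the tagger may learn slightly fewer corrective rules, but on a large corpus that trade-off is usually what makes training fit in memory at all.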