Getting AttributeError on nltk Textual entailment classifier

I am referring to this section of the NLTK book: http://www.nltk.org/book/ch06.html#recognizing-textual-entailment

def rte_features(rtepair):
    extractor = nltk.RTEFeatureExtractor(rtepair)
    features = {}
    features['word_overlap'] = len(extractor.overlap('word'))
    features['word_hyp_extra'] = len(extractor.hyp_extra('word'))
    features['ne_overlap'] = len(extractor.overlap('ne'))
    features['ne_hyp_extra'] = len(extractor.hyp_extra('ne'))
    return features
rtepair = nltk.corpus.rte.pairs(['rte3_dev.xml'])

extractor = nltk.RTEFeatureExtractor(rtepair)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-39-a7f96e33ba9e> in <module>()
----> 1 extractor = nltk.RTEFeatureExtractor(rtepair)

C:\Users\RAVINA\Anaconda2\lib\site-packages\nltk\classify\rte_classify.pyc in __init__(self, rtepair, stop, lemmatize)
     65 
     66         #Get the set of word types for text and hypothesis
---> 67         self.text_tokens = tokenizer.tokenize(rtepair.text)
     68         self.hyp_tokens = tokenizer.tokenize(rtepair.hyp)
     69         self.text_words = set(self.text_tokens)

AttributeError: 'list' object has no attribute 'text'

This is the exact code mentioned in the book. Can anyone help me figure out what is going wrong here? Thanks, Ravina

Look at the type signatures. In a Python shell, type:
import nltk
x = nltk.corpus.rte.pairs(['rte3_dev.xml'])
type(x)

This tells you that x is a list.

Now type:

help(nltk.RTEFeatureExtractor)

which tells you:

:param rtepair: a RTEPair from which features should be extracted

Clearly, x is not the right type to pass to nltk.RTEFeatureExtractor. Instead:

type(x[33])
<class 'nltk.corpus.reader.rte.RTEPair'>

The individual items in the list do have the right type.
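So construct the extractor (or call your rte_features function) on a single pair, or loop over the list. A minimal sketch of that usage, reusing rte_features from the question (the index 33 is just an arbitrary example):

import nltk

rtepairs = nltk.corpus.rte.pairs(['rte3_dev.xml'])   # a list of RTEPair objects

# pass one pair, not the whole list
extractor = nltk.RTEFeatureExtractor(rtepairs[33])

# or feed each pair through the feature function from the question
features = [rte_features(pair) for pair in rtepairs]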


Update: As mentioned in the comments, extractor.text_words shows only empty strings. This appears to be due to changes made in NLTK since the documentation was written. Long story short: you won't be able to fix this without downgrading to an older version of NLTK or patching the issue in NLTK yourself. In the file nltk/classify/rte_classify.py you will find the following snippet:

class RTEFeatureExtractor(object):
    …
    def __init__(self, rtepair, stop=True, lemmatize=False):
        …
        tokenizer = RegexpTokenizer('([A-Z]\.)+|\w+|\$[\d\.]+')
        self.text_tokens = tokenizer.tokenize(rtepair.text)
        self.text_words = set(self.text_tokens)

If you run the exact same RegexpTokenizer from the extractor over that text, it produces only empty strings:

import nltk
rtepair = nltk.corpus.rte.pairs(['rte3_dev.xml'])[33]
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer('([A-Z]\.)+|\w+|\$[\d\.]+')
tokenizer.tokenize(rtepair.text)

This returns ['', '', …, ''], i.e., a list of empty strings.
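The culprit is the capturing group in the pattern: with gaps=False, RegexpTokenizer effectively does a re.findall, and when the pattern contains a capturing group, findall returns the group's contents rather than the whole match, which is empty for anything matched by the \w+ branch. Here is a small illustration with an arbitrary sentence, plus a non-capturing variant of the pattern as a sketch of the kind of in-place fix mentioned above (not the official NLTK patch):

import re
from nltk.tokenize import RegexpTokenizer

sample = "He invested $2.5 in the U.S. market."  # any illustrative sentence

# findall returns the capturing group's content, not the whole match,
# so words matched by the \w+ branch come back as empty strings
print(re.findall(r'([A-Z]\.)+|\w+|\$[\d\.]+', sample))
# ['', '', '', '', '', 'S.', '']

# making the group non-capturing restores the intended tokens
fixed = RegexpTokenizer(r'(?:[A-Z]\.)+|\w+|\$[\d\.]+')
print(fixed.tokenize(sample))
# ['He', 'invested', '$2.5', 'in', 'the', 'U.S.', 'market']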