AttributeError: 'unicode' object has no attribute 'wup_similarity'

I'm playing around with the nltk module in Python 2.7. Below is my code:

from nltk.corpus import wordnet as wn

listsyn1 = []
listsyn2 = []

for synset in wn.synsets('dog', pos=wn.NOUN):
    print synset.name()
    for lemma in synset.lemmas():
        listsyn1.append(lemma.name())

for synset in wn.synsets('paw', pos=wn.NOUN):
    print synset.name()
    for lemma in synset.lemmas():
        listsyn2.append(lemma.name())

countsyn1 = len(listsyn1)
countsyn2 = len(listsyn2)

sumofsimilarity = 0;
for firstgroup in listsyn1:
    for secondgroup in listsyn2:
        print(firstgroup.wup_similarity(secondgroup))
        sumofsimilarity = sumofsimilarity + firstgroup.wup_similarity(secondgroup)

averageofsimilarity = sumofsimilarity/(countsyn1*countsyn2)

When I try to run this code I get the error "AttributeError: 'unicode' object has no attribute 'wup_similarity'". Thanks for your help.

The similarity measures are only accessible from Synset objects, not from Lemma objects or lemma names (which are of type str).

dog = wn.synsets('dog', 'n')[0]
paw = wn.synsets('paw', 'n')[0]

print(type(dog), type(paw), dog.wup_similarity(paw))

[out]:

<class 'nltk.corpus.reader.wordnet.Synset'> <class 'nltk.corpus.reader.wordnet.Synset'> 0.21052631578947367
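
If the goal is the average similarity that the question's code computes, a minimal sketch (my own adaptation, with a hypothetical helper name average_wup) is to run the same double loop over the Synset objects instead of over the lemma name strings:

from nltk.corpus import wordnet as wn

def average_wup(word1, word2, pos=wn.NOUN):
    # Average wup_similarity over all pairs of the two words' noun synsets.
    scores = []
    for ss1 in wn.synsets(word1, pos=pos):
        for ss2 in wn.synsets(word2, pos=pos):
            score = ss1.wup_similarity(ss2)
            if score is not None:  # wup_similarity can return None for some pairs
                scores.append(score)
    return sum(scores) / len(scores) if scores else 0.0

print(average_wup('dog', 'paw'))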

When you take .lemmas() from a Synset object and then access the .name() attribute, you get a str:

dog = wn.synsets('dog', 'n')[0]
print(type(dog), dog)
print(type(dog.lemmas()[0]), dog.lemmas()[0])
print(type(dog.lemmas()[0].name()), dog.lemmas()[0].name())

[out]:

<class 'nltk.corpus.reader.wordnet.Synset'> Synset('dog.n.01')
<class 'nltk.corpus.reader.wordnet.Lemma'> Lemma('dog.n.01.dog')
<class 'str'> dog

You can use the hasattr function to check which objects/types have access to a given function or attribute:

dog = wn.synsets('dog', 'n')[0]
print(hasattr(dog, 'wup_similarity'))
print(hasattr(dog.lemmas()[0], 'wup_similarity'))
print(hasattr(dog.lemmas()[0].name(), 'wup_similarity'))

[out]:

True
False
False

Most probably, you want a function like https://github.com/alvations/pywsd/blob/master/pywsd/similarity.py#L76 that maximizes the wup_similarity of two words' synsets, but note that there are many caveats, e.g. the pre-lemmatization that is needed.
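
To illustrate just the "maximize over all synset pairs" idea, here is a minimal sketch (not pywsd's actual implementation, and without the pre-lemmatization; max_wup is a hypothetical name):

from itertools import product
from nltk.corpus import wordnet as wn

def max_wup(word1, word2, pos=wn.NOUN):
    # Take the best wup_similarity over all pairs of the two words' synsets.
    scores = [ss1.wup_similarity(ss2)
              for ss1, ss2 in product(wn.synsets(word1, pos=pos),
                                      wn.synsets(word2, pos=pos))]
    scores = [s for s in scores if s is not None]
    return max(scores) if scores else None

print(max_wup('dog', 'paw'))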

So I think that is what you were trying to avoid by using .lemma_names(). Perhaps you could do something like this:

from itertools import chain, product
from nltk.corpus import wordnet as wn

def ss_lnames(word):
    # Collect all noun lemma names of a word across its synsets.
    return set(chain(*[ss.lemma_names() for ss in wn.synsets(word, 'n')]))

dog_lnames = ss_lnames('dog')
paw_lnames = ss_lnames('paw')

for dog_name, paw_name in product(dog_lnames, paw_lnames):
    for dog_ss, paw_ss in product(wn.synsets(dog_name, 'n'), wn.synsets(paw_name, 'n')):
        print(dog_ss, paw_ss, dog_ss.wup_similarity(paw_ss))  

But most probably the results will be uninterpretable and unreliable, since no word sense disambiguation is done before the synset lookups in either the outer or the inner loop.
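
If you do want to pick one sense per word before comparing, NLTK's nltk.wsd.lesk is a simple option; a rough sketch, assuming you have some context sentence for the words (the sentence below is made up):

from nltk.wsd import lesk
from nltk.corpus import wordnet as wn

# Disambiguate each word against a context sentence, then compare the chosen senses.
sentence = 'the dog licked its paw'.split()
dog_ss = lesk(sentence, 'dog', pos='n')
paw_ss = lesk(sentence, 'paw', pos='n')

if dog_ss and paw_ss:  # lesk returns None if it finds no candidate synsets
    print(dog_ss, paw_ss, dog_ss.wup_similarity(paw_ss))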