AttributeError: 'list' object has no attribute '_all_hypernyms' what is this error?
This program is meant to find the similarity between a sentence and a set of words, based on how similar their synsets are. I have already downloaded nltk. When I first wrote it, it ran without errors, but a few days later when I ran the program it gave me this error: AttributeError: 'list' object has no attribute '_all_hypernyms'
The error comes from this call: wn.wup_similarity
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.corpus import wordnet as wn

database = []
main_sen = []
words = []
range_is = [0.78]
word_check = [0.1]
main_sentence = "the world in ending with the time is called hello"
database_word = ["known", "complete"]

stopwords = stopwords.words('english')
words = word_tokenize(main_sentence)
filtered_sentences = []
for word in words:
    if word not in stopwords:
        filtered_sentences.append(word)
print(filtered_sentences)

for databasewords in database_word:
    database.append(wn.synsets(databasewords)[0])
print(database)

for sentences in filtered_sentences:
    main_sen.append(wn.synsets(sentences))
print(main_sen)

# Error is in below lines
for data in database:
    for sen in main_sen:
        word_check.append(wn.wup_similarity(data, sen))
        if word_check > range_is:
            count = +1
print(count)
I'm not sure exactly what you are trying to achieve with this code, but I can see why it breaks. The problem is in this line:

word_check.append(wn.wup_similarity(data, sen))

Here data is a single synset, but sen is a whole list of synsets (wn.synsets returns a list), and wup_similarity cannot handle a list of synsets.
So if you want to compare all of the synsets that were created, you will need an extra for loop to take each synset out of its list.
However, if you just want to compare the first synset of one word with the first synset of the other, you can simply do this:
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.corpus import wordnet as wn

database = []
main_sen = []
words = []
range_is = [0.78]
word_check = [0.1]
main_sentence = "the world in ending with the time is called hello"
database_word = ["known", "complete"]

stopwords = stopwords.words('english')
words = word_tokenize(main_sentence)
filtered_sentences = []
for word in words:
    if word not in stopwords:
        filtered_sentences.append(word)
print(filtered_sentences)

for databasewords in database_word:
    database.append(wn.synsets(databasewords)[0])
print(database)

for sentences in filtered_sentences:
    main_sen.append(wn.synsets(sentences)[0])  # [0] takes a single synset, not the whole list
print(main_sen)

for data in database:
    for sen in main_sen:
        word_check.append(wn.wup_similarity(data, sen))
        if word_check > range_is:
            count = +1
print(count)
You also need to be careful with the if word_check > range_is statement: it compares one list against another, when what you should be checking is the similarity result itself. Note that range_is is also a one-element list ([0.78]), so compare against range_is[0] (or store the threshold as a plain float). Likewise, count = +1 just assigns the value 1; to count matches you need count = 0 before the loops and count += 1 inside:

if wn.wup_similarity(data, sen) > range_is[0]:
    count += 1