Calculate similarity between one given word and a random list of words

I want to calculate the similarity between a given word and a random list of words, and then rank the results in a new list, for example:

list = ['bark','black','cat','bite','human','book'] #it could be another list

compared with the word:

word = ['dog']

--

import spacy
nlp = spacy.load('en_core_web_md')


dog = nlp("dog")
bark = nlp("bark")
bite = nlp("bite")
human = nlp("human")
book = nlp("book")
cat = nlp("cat")
black = nlp("black")

print("dog - bark", dog.similarity(bark)) #0.4258176903285793
print("dog - bite", dog.similarity(bite)) #0.4781574605069981
print("dog - human", dog.similarity(human)) #0.35814872466230835
print("dog - book", dog.similarity(book)) #0.22838638167627964
print("dog - cat", dog.similarity(cat)) #0.8016854705531046
print("dog - black", dog.similarity(black)) #0.30601667459001575

So how can I automatically calculate the similarity between the given word and each word in the list?

You can do it like this:

import spacy
nlp = spacy.load('en_core_web_md')

words = ['bark','black','cat','bite','human','book']
word = 'dog'
word_nlp = nlp(word)

# Pair each word with its similarity to the given word, then sort descending
new_words = [(w, word_nlp.similarity(nlp(w))) for w in words]
new_words.sort(key=lambda x: x[1], reverse=True)

for w, value in new_words:
    print(f"{word} - {w}", value)