Why is the stem of the word "got" still "got" instead of "get"?

from stemming.porter2 import stem

documents = ['got', 'get']

documents = [[stem(word) for word in sentence.split(" ")] for sentence in documents]
print(documents)

The result is:

[['got'], ['get']]

Can someone help explain this? Thanks!

What you want is a lemmatizer, not a stemmer. The difference is subtle.

Generally, a stemmer strips off as many suffixes as it can, and in some cases it keeps a list of exception words for forms whose normalized version cannot be found by simply removing suffixes.

A lemmatizer tries to find the "basic"/root/infinitive form of a word, and it usually requires specialized rules for each language (a short comparison sketch follows the links below).

  • what is the true difference between lemmatization vs stemming?
  • Stemmers vs Lemmatizers
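
To make the difference concrete, here is a minimal sketch (my own illustration, using NLTK's SnowballStemmer and WordNetLemmatizer rather than the stemming package from the question): the stemmer passes the irregular form "got" through unchanged, while the lemmatizer maps it to "get" once it is told the word is a verb.

from nltk.stem import SnowballStemmer, WordNetLemmatizer

stemmer = SnowballStemmer('english')
wnl = WordNetLemmatizer()

for word in ['got', 'running', 'studies']:
    # The stemmer only strips suffixes, so the irregular past tense "got"
    # comes back unchanged; the lemmatizer uses WordNet's morphy rules and
    # exception lists, so with pos='v' it recovers the infinitive "get".
    # e.g. got -> got | get, running -> run | run, studies -> studi | study
    print(word, '->', stemmer.stem(word), '|', wnl.lemmatize(word, pos='v'))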

Lemmatization with NLTK's implementation of the morphy lemmatizer requires the correct part-of-speech (POS) tag to be reasonably accurate.

Avoid (or in fact, never) trying to lemmatize individual words in isolation. Try lemmatizing a fully POS-tagged sentence, e.g.

from nltk import word_tokenize, pos_tag
from nltk.corpus import wordnet as wn

def penn2morphy(penntag, returnNone=False, default_to_noun=False):
    # Map the first two characters of a Penn Treebank tag to the
    # corresponding WordNet/morphy POS constant ('n', 'a', 'v', 'r').
    morphy_tag = {'NN': wn.NOUN, 'JJ': wn.ADJ,
                  'VB': wn.VERB, 'RB': wn.ADV}
    try:
        return morphy_tag[penntag[:2]]
    except KeyError:
        # Penn tags with no WordNet counterpart (DT, IN, CD, ...).
        if returnNone:
            return None
        elif default_to_noun:
            return 'n'
        else:
            return ''

With the penn2morphy helper function, you can convert the POS tags from pos_tag() to morphy tags, and then you can:

>>> from nltk.stem import WordNetLemmatizer
>>> wnl = WordNetLemmatizer()
>>> sent = "He got up in bed at 8am."
>>> [(token, penn2morphy(tag)) for token, tag in pos_tag(word_tokenize(sent))]
[('He', ''), ('got', 'v'), ('up', ''), ('in', ''), ('bed', 'n'), ('at', ''), ('8am', ''), ('.', '')]
>>> [wnl.lemmatize(token, pos=penn2morphy(tag, default_to_noun=True)) for token, tag in pos_tag(word_tokenize(sent))]
['He', 'get', 'up', 'in', 'bed', 'at', '8am', '.']
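
If you need this in more than one place, the steps above can be wrapped into a small helper. This is just a convenience sketch reusing wnl, penn2morphy, pos_tag and word_tokenize defined above; the name lemmatize_sent is mine, not part of NLTK:

def lemmatize_sent(sent):
    # Tokenize and POS-tag the sentence, map each Penn tag to a morphy
    # POS (defaulting to noun), then lemmatize every token.
    return [wnl.lemmatize(token, pos=penn2morphy(tag, default_to_noun=True))
            for token, tag in pos_tag(word_tokenize(sent))]

>>> lemmatize_sent("He got up in bed at 8am.")
['He', 'get', 'up', 'in', 'bed', 'at', '8am', '.']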

For convenience, you can also try the pywsd lemmatizer (installable with pip install pywsd):

>>> from pywsd.utils import lemmatize_sentence
Warming up PyWSD (takes ~10 secs)... took 7.196984529495239 secs.
>>> sent = "He got up in bed at 8am."
>>> lemmatize_sentence(sent)
['he', 'get', 'up', 'in', 'bed', 'at', '8am', '.']
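
Note that lemmatize_sentence also lowercases the tokens, which is why it returns 'he' where the manual pipeline above kept 'He'.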

See also