Q : Python Spell Checker using NLTK
So I have these lines of code using the NLTK library:
def autospell(text):
    spells = [spell(w) for w in nltk.word_tokenize(text)]
    return " ".join(spells)

train_data['Phrase'][:200].apply(autospell)
I get this error message telling me that the name spell is not defined. I don't know what that means, since I thought spell came from the NLTK library. Am I missing something?
NameError Traceback (most recent call last)
<ipython-input-119-582bf5662c88> in <module>()
5 spells = [spell(w) for w in (nltk.word_tokenize(text))]
6 return " ".join(spells)
----> 7 train_data['Phrase'][:200].apply(autospell)
2 frames
pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()
<ipython-input-119-582bf5662c88> in <listcomp>(.0)
3 correct the spelling of the word.
4 """
----> 5 spells = [spell(w) for w in (nltk.word_tokenize(text))]
6 return " ".join(spells)
7 train_data['Phrase'][:200].apply(autospell)
NameError: name 'spell' is not defined
Looking at Spell Checker for Python, you should probably use the autocorrect library.

Sample code:
import nltk
from autocorrect import Speller

spell = Speller(lang='en')

def autospell(text):
    spells = [spell(w) for w in nltk.word_tokenize(text)]
    return " ".join(spells)

train_data['Phrase'][:200].apply(autospell)
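To sanity-check the `.apply` plumbing separately from the spell checker itself, you can swap in any per-phrase function. Below is a minimal sketch: `fake_autospell` and the two-row `train_data` DataFrame are made-up stand-ins, just to show that `Series.apply` calls the function once per phrase and returns a new Series.

```python
import pandas as pd

# Hypothetical stand-in for the real corrector: it just uppercases
# each whitespace-separated token instead of spell-correcting it.
def fake_autospell(text):
    return " ".join(w.upper() for w in text.split())

# Toy data mimicking the train_data['Phrase'] column from the question.
train_data = pd.DataFrame({"Phrase": ["helo world", "speling test"]})

# Same call shape as in the question: slice the column, apply per row.
corrected = train_data["Phrase"][:200].apply(fake_autospell)
print(corrected.tolist())  # ['HELO WORLD', 'SPELING TEST']
```

Once this shape works, replacing `fake_autospell` with the real `autospell` (with `spell` defined as above) is the only change needed.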