WordNetLemmatizer error - all letters are lemmatized instead of words
I am trying to lemmatize my dataset for sentiment analysis - what should I do to get the expected output rather than the current output? The input file is a csv, stored as a DataFrame object.
dataset = pd.read_csv('xyz.csv')
Here is my code:
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
list1_ = []
for file_ in dataset:
    result1 = dataset['Content'].apply(lambda x: [lemmatizer.lemmatize(y) for y in x])
    list1_.append(result1)
dataset = pd.concat(list1_, ignore_index=True)
Expected output
>> lemmatizer.lemmatize('cats')
>> [cat]
Current output
>> lemmatizer.lemmatize('cats')
>> [c,a,t,s]
TL;DR
result1 = dataset['Content'].apply(lambda x: [lemmatizer.lemmatize(y) for y in x.split()])
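Applied to the question's DataFrame, a minimal sketch might look like the following (the column name 'Content' and the file 'xyz.csv' are taken from the question; nltk.word_tokenize is used instead of str.split, as recommended further down):

import pandas as pd
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
dataset = pd.read_csv('xyz.csv')

# Tokenize each row into words first, then lemmatize token by token.
# The outer for-loop from the question is unnecessary: .apply() already
# walks the 'Content' column row by row.
dataset['Content'] = dataset['Content'].apply(
    lambda text: [lemmatizer.lemmatize(word) for word in word_tokenize(text)]
)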
The lemmatizer accepts any string as input. If the dataset['Content'] column contains strings, iterating over a string iterates over its characters, not over "words", e.g.
>>> from nltk.stem import WordNetLemmatizer
>>> wnl = WordNetLemmatizer()
>>> x = 'this is a foo bar sentence, that is of type str'
>>> [wnl.lemmatize(ch) for ch in x]
['t', 'h', 'i', 's', ' ', 'i', 's', ' ', 'a', ' ', 'f', 'o', 'o', ' ', 'b', 'a', 'r', ' ', 's', 'e', 'n', 't', 'e', 'n', 'c', 'e', ',', ' ', 't', 'h', 'a', 't', ' ', 'i', 's', ' ', 'o', 'f', ' ', 't', 'y', 'p', 'e', ' ', 's', 't', 'r']
So you have to word-tokenize your sentence string first, e.g.:
>>> from nltk import word_tokenize
>>> [wnl.lemmatize(word) for word in x.split()]
['this', 'is', 'a', 'foo', 'bar', 'sentence,', 'that', 'is', 'of', 'type', 'str']
>>> [wnl.lemmatize(ch) for ch in word_tokenize(x)]
['this', 'is', 'a', 'foo', 'bar', 'sentence', ',', 'that', 'is', 'of', 'type', 'str']
Another example:
>>> from nltk import word_tokenize
>>> x = 'the geese ran through the parks'
>>> [wnl.lemmatize(word) for word in x.split()]
['the', u'goose', 'ran', 'through', 'the', u'park']
>>> [wnl.lemmatize(ch) for ch in word_tokenize(x)]
['the', u'goose', 'ran', 'through', 'the', u'park']
But to get a more accurate lemmatization, you should tokenize and POS-tag the sentence first, see https://github.com/alvations/earthy/blob/master/FAQ.md#how-to-use-default-nltk-functions-in-earthy
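For illustration, here is a minimal sketch of POS-aware lemmatization (not from the linked FAQ), assuming the NLTK 'punkt', 'averaged_perceptron_tagger' and 'wordnet' resources are downloaded; penn_to_wordnet is a hypothetical helper name used only for this example:

from nltk import word_tokenize, pos_tag
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()

def penn_to_wordnet(tag):
    # Map a Penn Treebank tag (as returned by pos_tag) to a WordNet POS
    # constant; WordNetLemmatizer treats everything as a noun by default,
    # so fall back to NOUN for unknown tags.
    if tag.startswith('J'):
        return wordnet.ADJ
    if tag.startswith('V'):
        return wordnet.VERB
    if tag.startswith('R'):
        return wordnet.ADV
    return wordnet.NOUN

x = 'the geese ran through the parks'
tagged = pos_tag(word_tokenize(x))
print([wnl.lemmatize(word, penn_to_wordnet(tag)) for word, tag in tagged])
# ['the', 'goose', 'run', 'through', 'the', 'park'] -- 'ran' is now lemmatized to 'run',
# which the noun-only default above would have left unchanged.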