NLTK TypeError: unhashable type: 'list'
I'm currently lemmatizing the words in a CSV file. Before that, I lowercase all the words, remove all punctuation, and split the column.
I only use two of the CSV columns. analyze.info():
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4637 entries, 0 to 4636
Data columns (total 2 columns):
 #   Column          Non-Null Count  Dtype
 0   Comments        4637 non-null   object
 1   Classification  4637 non-null   object
import string
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
analyze = pd.read_csv('C:/Users/(..)/Talk London/ALL_dataset.csv', delimiter=';', low_memory=False, encoding='cp1252', usecols=['Comments', 'Classification'])
lower_case = analyze['Comments'].str.lower()
cleaned_text = lower_case.str.translate(str.maketrans('', '', string.punctuation))
tokenized_words = cleaned_text.str.split()
final_words = []
for word in tokenized_words:
    if word not in stopwords.words('english'):
        final_words.append(word)
wnl = WordNetLemmatizer()
lemma_words = []
lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
lemma_words.append(lem)
When I run the code, it returns this error:
Traceback (most recent call last):
File "C:/Users/suiso/PycharmProjects/SA_working/SA_Main.py", line 52, in <module>
lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
File "C:/Users/suiso/PycharmProjects/SA_working/SA_Main.py", line 52, in <listcomp>
lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
File "C:\Users\suiso\PycharmProjects\SA_working\venv\lib\site-packages\nltk\stem\wordnet.py", line 38, in lemmatize
lemmas = wordnet._morphy(word, pos)
File "C:\Users\suiso\PycharmProjects\SA_working\venv\lib\site-packages\nltk\corpus\reader\wordnet.py", line 1897, in _morphy
if form in exceptions:
TypeError: unhashable type: 'list'
tokenized_words is a Series of lists, not a Series of strings, because you used the split
method. So you need a nested for loop, like this:
lem = ' '.join([wnl.lemmatize(word) for word_list in tokenized_words for word in word_list])
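Here is a minimal, self-contained sketch of that fix. It uses a trivial stand-in for wnl.lemmatize (crudely stripping a trailing "s") so the example runs without the NLTK wordnet data; in your real code, keep WordNetLemmatizer().lemmatize:

```python
import string
import pandas as pd

# Stand-in for wnl.lemmatize so this sketch runs without NLTK data.
# In real code, use: wnl = WordNetLemmatizer(); wnl.lemmatize(word)
def lemmatize(word):
    return word[:-1] if word.endswith('s') else word

comments = pd.Series(["The dogs bark loudly!", "Cats purr."])
cleaned = comments.str.lower().str.translate(
    str.maketrans('', '', string.punctuation))
tokenized_words = cleaned.str.split()  # a Series of lists, not strings

# Nested loop: outer over rows (each a list), inner over words in that list.
lem = ' '.join(lemmatize(word)
               for word_list in tokenized_words
               for word in word_list)
print(lem)  # "the dog bark loudly cat purr"
```

Note that this flattens every comment into one string. If you want one lemmatized string per comment instead, apply the inner join per row (e.g. with Series.apply) rather than joining across the whole Series.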