Nltk: Eliminating stop words from list of list
I'm trying to remove stop words and have tried the following:
tokenizer = RegexpTokenizer(r'\w+')
tokenized = data['data_column'].apply(tokenizer.tokenize)
tokenized
After tokenizing, the output looks like:
0    [ANOTHER, SAMPLE, AS, OUTPUT, MSG...
1    [A, SAMPLE, TEXT, FOR, ILLUSTRATION...
Name: data_column, dtype: object
I tried to remove the stop words using:
stop_words = set(stopwords.words('english'))
filtered_sentence = [w for w in tokenized if not w in stop_words]
filtered_sentence = []
for w in tokenized:
    if w not in stop_words:
        filtered_sentence.append(w)
I get the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-272-d4a699384ffc> in <module>()
2 stop_words = set(stopwords.words('english'))
3
----> 4 filtered_sentence = [w for w in tokenized if not w in stop_words]
5
6 filtered_sentence = []
TypeError: unhashable type: 'list'
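The TypeError happens because iterating the Series yields each row's whole token list, not individual words, so `w in stop_words` tries to hash a list. A minimal sketch (with a made-up Series) showing what the loop actually sees:

```python
import pandas as pd

# A Series of token lists, shaped like `tokenized` above
tokenized = pd.Series([['A', 'SAMPLE'], ['ANOTHER', 'MSG']])

# Iterating the Series yields whole lists, not individual words,
# so `w in stop_words` would try to hash a list and raise TypeError
for w in tokenized:
    print(type(w))  # each w is a list
```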
You need .apply() to filter each list inside the Series of lists. Also, since the stop-word corpus contains lowercase words, you need to call .lower() on each word before the lookup, i.e.
stop_words = set(stopwords.words('english'))
filtered_sentence = tokenized.apply(lambda x : [w for w in x if w.lower() not in stop_words])
Sample run:
from nltk.corpus import stopwords
stop = set(stopwords.words('english'))
df = pd.DataFrame({'words': [['A','SAMPLE','AS','OUTPUT','MSG']]})
df['words'].apply(lambda x : [i for i in x if not i.lower() in stop])
0 [SAMPLE, OUTPUT, MSG]
Name: words, dtype: object