Remove non-english words from column in pyspark

I am working with a pyspark dataframe that looks like this:

+-------+--------------------------------------------------+
|     id|                                             words|
+-------+--------------------------------------------------+
|1475569|[pt, m, reporting, delivery, scam, thank, 0a, 0...|
|1475568|[, , delivered, trblake, yahoo, com, received, ...|
|1475566|[,  marco, v, washin, gton, thursday, de, cembe...|
|1475565|[, marco, v, washin, gton, wednesday, de, cembe...|
|1475563|[joyce, 20, begin, forwarded, message, 20, memo...|
+-------+--------------------------------------------------+

Schema of the df:

id: 'bigint'
words: 'array<string>'

I want to remove the non-English words from the 'words' column (including numeric values and words containing digits, e.g. Bun20). I have already removed stop words, but how do I remove the remaining non-English words from the column?

Please help.

You can use a UDF that checks whether each word in the array appears in the nltk words corpus:

import pyspark.sql.functions as F
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download('words')
nltk.download('wordnet')

wnl = WordNetLemmatizer()

# Build the vocabulary once as a set: membership tests are O(1),
# whereas calling nltk.corpus.words.words() (a list) inside the
# comprehension would rescan ~236k entries for every word.
english_words = set(nltk.corpus.words.words())

@F.udf('array<string>')
def remove_words(words):
    # Lemmatize each word (e.g. "messages" -> "message") before the
    # lookup; tokens with digits such as "Bun20" are never in the
    # corpus, so they are dropped as well.
    return [word for word in words if wnl.lemmatize(word) in english_words]

df2 = df.withColumn('words', remove_words('words'))
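The filtering logic inside the UDF can be sketched in plain Python, with a small toy vocabulary standing in for the full nltk corpus (a real run would use `set(nltk.corpus.words.words())` and lemmatization), to see why tokens like `0a` or `Bun20` are dropped:

```python
# Toy vocabulary; a stand-in for set(nltk.corpus.words.words()).
english_words = {"reporting", "delivery", "scam", "thank", "delivered", "received"}

def remove_words(words):
    # Keep only tokens present in the vocabulary; purely numeric
    # tokens and mixed tokens like "Bun20" never match.
    return [w for w in words if w in english_words]

print(remove_words(["pt", "reporting", "delivery", "scam", "0a", "Bun20"]))
# → ['reporting', 'delivery', 'scam']
```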