NLP analysis on PySpark DataFrame columns with NumPy vectorization
I want to do some NLP analysis on a string column of a PySpark DataFrame.
df:
year month u_id rating_score p_id review
2010 09 tvwe 1 p_5 I do not like it because its size is not for me.
2011 11 frsa 1 p_7 I am allergic to the peanut elements.
2015 5 ybfd 1 p_2 It is a repeated one, please no more.
2016 7 tbfb 2 p_2 It is not good for my oil hair.
Each p_id represents an item.
Each u_id may have several reviews for each item. A review can be a few words, a sentence, a paragraph, or even just emojis.
I want to find the root causes of why items are rated low or high.
For example, how many u_ids complained about issues related to item characteristics, such as item size or allergies to chemical ingredients.
From How to iterate over rows in a DataFrame in Pandas, I learned that it is more efficient to convert the DataFrame to a NumPy array and then use vectorization for the NLP analysis.
I am trying to use SparkNLP to extract adjectives and noun phrases from each review, keyed by year, month, u_id, and p_id.
I am not sure how to apply NumPy vectorization to do this efficiently.
My Python 3 code:
from sparknlp.pretrained import PretrainedPipeline
import numpy as np

df = spark.sql('select year, month, u_id, p_id, comment from MY_DF where rating_score = 1 and isnull(comment) = false')

# pandas-style calls that fail on a Spark DataFrame column:
trainseries = df['comment'].apply(lambda x: np.array(x.toArray())).as_matrix().reshape(-1, 1)
text = np.apply_along_axis(lambda x: x[0], 1, trainseries)  # TypeError: 'Column' object is not callable

pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
result = pipeline_dl.fullAnnotate(text)
The code does not work. I also need to keep the other columns (e.g. year, month, u_id, p_id) through the vectorization, and to make sure the NLP analysis results can be matched back to year, month, u_id, and p_id.
I do not like this approach, because collect() is too slow.
Thanks.
IIUC, you don't need NumPy (Spark handles the vectorization internally). Just run transform, then select and filter the relevant information from the resulting DataFrame:
from sparknlp.pretrained import PretrainedPipeline

df = spark.sql('select year, month, u_id, p_id, comment from MY_DF where rating_score = 1 and isnull(comment) = false')

# the pretrained pipeline expects its input column to be named `text`
df1 = df.withColumnRenamed('comment', 'text')

pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
result = pipeline_dl.transform(df1)

# keep the original columns and pull out the words whose POS tag
# starts with NN (nouns) or JJ (adjectives)
df_new = result.selectExpr(
    *df1.columns,
    'transform(filter(pos, p -> p.result rlike "^(?:NN|JJ)"), x -> x.metadata.word) as words'
)
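Side note: the expression above uses Spark's higher-order SQL functions filter and transform, which require Spark 2.4+. If you want to see the annotation structure the expression navigates (purely an inspection aid, not part of the solution):

# each entry in `pos` is an annotation struct: `result` holds the POS
# tag and `metadata.word` holds the original token
result.select('pos').printSchema()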
Output:
df_new.show(10,0)
+-----+-----+----+------------+----+------------------------------------------------+----------------------------+
|year |month|u_id|rating_score|p_id|text                                            |words                       |
+-----+-----+----+------------+----+------------------------------------------------+----------------------------+
|2010 |09 |tvwe|1 |p_5 |I do not like it because its size is not for me.|[size] |
|2011 |11 |frsa|1 |p_7 |I am allergic to the peanut elements. |[allergic, peanut, elements]|
|2015 |5 |ybfd|1 |p_2 |It is a repeated one, please no more. |[more] |
|2016 |7 |tbfb|2 |p_2 |It is not good for my oil hair. |[good, oil, hair] |
+-----+-----+----+------------+----+------------------------------------------------+----------------------------+
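To get from here to the original goal (how many u_ids complained about size, allergies, etc.), here is a minimal follow-up sketch; treating each extracted word as a complaint topic and the choice of grouping keys are my assumptions, not part of the answer above:

from pyspark.sql import functions as F

# explode the extracted words and count distinct complaining users
# per item and word (hypothetical aggregation, adjust keys as needed)
complaints = (df_new
    .select('p_id', 'u_id', F.explode('words').alias('word'))
    .groupBy('p_id', 'word')
    .agg(F.countDistinct('u_id').alias('n_users')))
complaints.orderBy(F.desc('n_users')).show()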
Notes:
(1) result = pipeline.fullAnnotate(df, 'comment') is a shortcut for renaming comment to text and then running pipeline.transform(df1); the first argument of fullAnnotate can be a DataFrame, a List, or a String (a minimal illustration follows these notes).
(2) A list of POS tags is available at https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
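For note (1), a minimal sketch of the equivalent shortcut call, reusing pipeline_dl and df from above:

# per note (1): rename comment -> text and transform in a single call
result = pipeline_dl.fullAnnotate(df, 'comment')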