Removing columns in a nested struct in a spark dataframe using PySpark (details in text)
I know I've asked a similar question before, but that one was about filtering rows. This time I am trying to remove columns instead. I tried implementing higher order functions like FILTER for a while, but I could not get them to work. What I think I need is a SELECT higher order function, but that does not seem to exist. Thanks for your help!
I am using pyspark, and I have a dataframe object df. This is the output of df.printSchema():
root
|-- M_MRN: string (nullable = true)
|-- measurements: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- Observation_ID: string (nullable = true)
| | |-- Observation_Name: string (nullable = true)
| | |-- Observation_Result: string (nullable = true)
I would like to keep only the 'Observation_ID' and 'Observation_Result' fields inside 'measurements'. Currently, when I run df.select('measurements').take(2), I get:
[Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='108/72'),
Row(Observation_ID='11', Observation_Name='ABC', Observation_Result='70'),
Row(Observation_ID='10', Observation_Name='ABC', Observation_Result='73.029'),
Row(Observation_ID='14', Observation_Name='XYZ', Observation_Result='23.1')]),
Row(measurements=[Row(Observation_ID='2', Observation_Name='ZZZ', Observation_Result='3/4'),
Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='7')])]
After the filtering described above, I would like running df.select('measurements').take(2) to return:
[Row(measurements=[Row(Observation_ID='5', Observation_Result='108/72'),
Row(Observation_ID='11', Observation_Result='70'),
Row(Observation_ID='10', Observation_Result='73.029'),
Row(Observation_ID='14', Observation_Result='23.1')]),
Row(measurements=[Row(Observation_ID='2', Observation_Result='3/4'),
Row(Observation_ID='5', Observation_Result='7')])]
Is there a way to do this in pyspark? Looking forward to your help!
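For reference, a minimal sketch that reproduces the sample data above (the M_MRN values are made up, since they are not shown in the output):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Only the measurements match the rows shown above; the M_MRN values are hypothetical.
df = spark.createDataFrame(
    [
        ("1", [("5", "ABC", "108/72"), ("11", "ABC", "70"),
               ("10", "ABC", "73.029"), ("14", "XYZ", "23.1")]),
        ("2", [("2", "ZZZ", "3/4"), ("5", "ABC", "7")]),
    ],
    "M_MRN string, measurements array<struct<"
    "Observation_ID:string,Observation_Name:string,Observation_Result:string>>",
)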
You can use the higher order function transform to select the fields you want and put them in a struct.
from pyspark.sql import functions as F

# Rebuild each array element, keeping only the two desired fields.
df.withColumn("measurements", F.expr("""
    transform(measurements,
              x -> struct(x.Observation_ID as Observation_ID,
                          x.Observation_Result as Observation_Result))
""")).printSchema()
#root
#|-- measurements: array (nullable = true)
#| |-- element: struct (containsNull = false)
#| | |-- Observation_ID: string (nullable = true)
#| | |-- Observation_Result: string (nullable = true)
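On Spark 3.1 and later, the same rewrite can also be expressed on the Python side with pyspark.sql.functions.transform instead of a SQL expression; a minimal sketch, assuming Spark 3.1+:
from pyspark.sql import functions as F

# F.transform (Spark 3.1+) takes a column and a Python lambda over each element.
df.withColumn(
    "measurements",
    F.transform(
        "measurements",
        lambda x: F.struct(
            x["Observation_ID"].alias("Observation_ID"),
            x["Observation_Result"].alias("Observation_Result"),
        ),
    ),
).printSchema()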
For anyone looking for an answer that works on older versions of pyspark, here is one using udfs:
import pyspark.sql.functions as f
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

# Return type of the udf: the same array of structs, minus Observation_Name.
_measurement_type = ArrayType(StructType([
    StructField('Observation_ID', StringType(), True),
    StructField('Observation_Result', StringType(), True)
]))

@f.udf(returnType=_measurement_type)
def higher_order_select(measurements):
    return [(m.Observation_ID, m.Observation_Result) for m in measurements]

df.select(higher_order_select('measurements').alias('measurements')).printSchema()
which prints
root
|-- measurements: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- Observation_ID: string (nullable = true)
| | |-- Observation_Result: string (nullable = true)
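Note that the select above keeps only the measurements column; to keep the other top-level columns such as M_MRN and replace just the nested array, you could use withColumn instead:
df.withColumn('measurements', higher_order_select('measurements')).printSchema()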