Remove nested column in pyspark
I have a pyspark dataframe with a column `results`. Inside `results`, I want to remove the sub-field `Attributes`.
The schema of the dataframe is (there are more fields under `results`, but I have omitted them for brevity since the schema is large):
|-- results: struct (nullable = true)
| |-- l: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- m: struct (nullable = true)
| | | | |-- Attributes: struct (nullable = true)
| | | | | |-- m: struct (nullable = true)
| | | | | | |-- Score: struct (nullable = true)
| | | | | | | |-- n: string (nullable = true)
| | | | |-- OtherInfo: struct (nullable = true)
| | | | | |-- l: array (nullable = true)
| | | | | | |-- element: struct (containsNull = true)
| | | | | | | |-- m: struct (nullable = true)
| | | | | | | | |-- Name: string (nullable = true)
How can this be done in pyspark without a udf?
Edit:
A sample row is:
{
  "results": {
    "l": [
      {
        "m": {
          "Attributes": {
            "m": {
              "Score": { "n": "85" }
            }
          },
          "OtherInfo": {
            "l": [
              {
                "m": { "Name": "john" }
              },
              {
                "m": { "Name": "Cena" }
              }
            ]
          }
        }
      }
    ]
  }
}
To remove a field from a struct type, you have to create a new struct that contains all the fields of the original struct except the one you want to drop.
Here, since the field `l` under `results` is an array, you can use the `transform` function (Spark 2.4+) to rewrite each of its struct elements like this:
from pyspark.sql.functions import struct, expr

# Rebuild every element of results.l, keeping only the OtherInfo field of m
t_expr = "transform(results.l, x -> struct(struct(x.m.OtherInfo as OtherInfo) as m))"
df = df.withColumn("results", struct(expr(t_expr).alias("l")))
For each element `x` of the array, we create a new struct that contains only the `x.m.OtherInfo` field.
df.printSchema()
#root
# |-- results: struct (nullable = false)
# | |-- l: array (nullable = true)
# | | |-- element: struct (containsNull = false)
# | | | |-- m: struct (nullable = false)
# | | | | |-- OtherInfo: struct (nullable = true)
# | | | | | |-- l: array (nullable = true)
# | | | | | | |-- element: struct (containsNull = true)
# | | | | | | | |-- m: struct (nullable = true)
# | | | | | | | | |-- Name: string (nullable = true)
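The pruning that the `transform` expression performs can be sketched in plain Python on the sample row above. This is only an illustration of the shape of the rewrite, not the Spark code path: each element `x` of `results.l` is rebuilt with an `m` struct that keeps `OtherInfo` and drops `Attributes`.

```python
import json

# Sample row from the question (with the JSON typo fixed: "Name": "john")
row = {
    "results": {
        "l": [
            {
                "m": {
                    "Attributes": {"m": {"Score": {"n": "85"}}},
                    "OtherInfo": {
                        "l": [
                            {"m": {"Name": "john"}},
                            {"m": {"Name": "Cena"}},
                        ]
                    },
                }
            }
        ]
    }
}

def drop_attributes(row):
    """Mirror of the transform expression: rebuild each element of
    results.l with an m struct that keeps only OtherInfo."""
    return {
        "results": {
            "l": [
                {"m": {"OtherInfo": x["m"]["OtherInfo"]}}
                for x in row["results"]["l"]
            ]
        }
    }

pruned = drop_attributes(row)
print(json.dumps(pruned, indent=2))
```

After the rewrite, `Attributes` is gone while `OtherInfo` (and the names inside it) are untouched, matching the printed schema above.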