PySpark: flatten JSON values inside a column

My dataframe looks like this:

Json_column
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
{"coordinates":[null,null,null,null,null],"datetime":[1642602463000,1642600679000,1642598301000,1642598232000,1642596529000],"followers_count":[568,5037,76,4325,107]}
{"coordinates":[null,null,null,null,null],"datetime":[1641919643000,1641918112000,1641918082000,1641917719000,1641916830000],"followers_count":[233,63,99,750,186]}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I need to flatten this dataframe as follows:

+-------------+-----------+---------------+
|datetime     |coordinates|followers_count|
+-------------+-----------+---------------+
|1642602463000|null       |568            |
|1642600679000|null       |5037           |
|1642598301000|null       |76             |
|1642598232000|null       |4325           |
|1642596529000|null       |107            |
|1641919643000|null       |233            |
|1641918112000|null       |63             |
|1641918082000|null       |99             |
|1641917719000|null       |750            |
|1641916830000|null       |186            |
+-------------+-----------+---------------+

I tried this code:

df.withColumn("datetime", F.get_json_object(F.col("Json_column"), "$.datetime")) \
  .withColumn("coordinates", F.get_json_object(F.col("Json_column"), "$.coordinates")) \
  .withColumn("followers_count", F.get_json_object(F.col("Json_column"), "$.followers_count")) \
  .select("datetime", "followers_count", "coordinates")

But it returns the arrays as JSON strings instead of flattening the data into rows.

Use `from_json` to parse the JSON string into a struct type, then `arrays_zip` the array fields of the struct and explode the result:

from pyspark.sql import functions as F

result = df.withColumn(
    "Json_column",
    # parse the JSON string into a struct of arrays
    F.from_json(
        "Json_column",
        "struct<coordinates:array<string>,datetime:array<long>,followers_count:array<int>>"
    )
).withColumn(
    "Json_column",
    # zip the parallel arrays element-wise into one array of structs
    F.arrays_zip("Json_column.datetime", "Json_column.coordinates", "Json_column.followers_count")
).selectExpr(
    # inline explodes the array of structs into rows, one column per struct field
    "inline(Json_column)"
)

result.show()
#+-------------+-----------+---------------+
#|datetime     |coordinates|followers_count|
#+-------------+-----------+---------------+
#|1642602463000|null       |568            |
#|1642600679000|null       |5037           |
#|1642598301000|null       |76             |
#|1642598232000|null       |4325           |
#|1642596529000|null       |107            |
#|1641919643000|null       |233            |
#|1641918112000|null       |63             |
#|1641918082000|null       |99             |
#|1641917719000|null       |750            |
#|1641916830000|null       |186            |
#+-------------+-----------+---------------+
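For intuition, `arrays_zip` followed by `inline` is the columnar analogue of Python's built-in `zip`: each position across the parallel arrays becomes one row. A minimal pure-Python sketch of the same transformation (the sample row is hypothetical, truncated to two elements per array):

```python
import json

# hypothetical sample row mirroring the question's Json_column, truncated for brevity
rows = [
    '{"coordinates":[null,null],"datetime":[1642602463000,1642600679000],"followers_count":[568,5037]}',
]

flattened = []
for raw in rows:
    parsed = json.loads(raw)  # analogous to F.from_json
    # element-wise zip of the parallel arrays, analogous to arrays_zip + inline
    for dt, coord, fc in zip(parsed["datetime"], parsed["coordinates"], parsed["followers_count"]):
        flattened.append({"datetime": dt, "coordinates": coord, "followers_count": fc})

print(flattened)
# [{'datetime': 1642602463000, 'coordinates': None, 'followers_count': 568},
#  {'datetime': 1642600679000, 'coordinates': None, 'followers_count': 5037}]
```

Note that `zip` (like `arrays_zip`) pairs elements by position, so the approach assumes the three arrays in each JSON object have the same length.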