Relationalize json nested array
I have the following catalog table that I want to flatten with AWS Glue:
| accountId | resourceId | items |
|-----------|------------|-----------------------------------------------------------------|
| 1 | r1 | [{name: "tool", version: "1.0"}, {name: "app", version: "1.0"}] |
| 1 | r2 | [{name: "tool", version: "2.0"}, {name: "app", version: "2.0"}] |
| 2 | r3 | [{name: "tool", version: "3.0"}, {name: "app", version: "3.0"}] |
I want to flatten it into the following:
| accountId | resourceId | name | version |
|-----------|------------|------|---------|
| 1 | r1 | tool | 1.0 |
| 1 | r1 | app | 1.0 |
| 1 | r2 | tool | 2.0 |
| 1 | r2 | app | 2.0 |
| 2 | r3 | tool | 3.0 |
| 2 | r3 | app | 3.0 |
Relationalize.apply only flattens the nested items; it does not carry accountId and resourceId into the result. Is there any way around this?
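For context, a typical Relationalize invocation looks roughly like the sketch below (the catalog database, table name, and staging path are placeholders, not values from the actual job); it returns a DynamicFrameCollection with one frame per nested structure rather than a single flattened table:
# Hypothetical Glue job skeleton; database, table and staging path are made up.
from awsglue.context import GlueContext
from awsglue.transforms import Relationalize
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# hypothetical catalog database/table names
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="my_db", table_name="my_table")

dfc = Relationalize.apply(frame=dyf, staging_path="s3://my-bucket/tmp/", name="root")

# one DynamicFrame per nested structure; the exact keys depend on the schema
print(dfc.keys())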
In PySpark, if the array elements were valid JSON, like this:
{"name": "tool", "version": "1.0"}
you could use explode + from_json to parse them into a struct.
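A minimal sketch of that idea, assuming the items really were valid JSON strings (note the quoted keys, unlike the data above):
from pyspark.sql.functions import col, explode, from_json
from pyspark.sql.types import StructType, StructField, StringType

# toy data with properly quoted JSON keys (an assumption, not the original data)
json_df = spark.createDataFrame([
    (1, "r1", ['{"name": "tool", "version": "1.0"}', '{"name": "app", "version": "1.0"}'])
], ["accountId", "resourceId", "items"])

item_schema = StructType([
    StructField("name", StringType()),
    StructField("version", StringType()),
])

json_df.withColumn("item", explode(col("items"))) \
    .withColumn("item", from_json(col("item"), item_schema)) \
    .select("accountId", "resourceId", "item.name", "item.version") \
    .show()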
Here, though, some cleanup is needed first. One approach is to explode the items column, apply the str_to_map function to get a map column, then explode that map and pivot so the map keys become columns.
from pyspark.sql.functions import col, explode, expr, first

df = spark.createDataFrame([
    (1, "r1", ['{name: "tool", version: "1.0"}', '{name: "app", version: "1.0"}']),
    (1, "r2", ['{name: "tool", version: "2.0"}', '{name: "app", version: "2.0"}']),
    (2, "r3", ['{name: "tool", version: "3.0"}', '{name: "app", version: "3.0"}'])
], ["accountId", "resourceId", "items"])

# remove leading and trailing {} and convert the "key: value" pairs to a map
sql_expr = "str_to_map(trim(BOTH '{}' FROM items), ',', ':')"

df.withColumn("items", explode(col("items"))) \
  .select(col("*"), explode(expr(sql_expr))) \
  .groupBy("accountId", "resourceId", "items") \
  .pivot("key") \
  .agg(first(expr("trim(BOTH '\"' FROM trim(value))"))) \
  .drop("items") \
  .show()
#+---------+----------+--------+----+
#|accountId|resourceId| version|name|
#+---------+----------+--------+----+
#| 1| r1| 1.0| app|
#| 1| r2| 2.0| app|
#| 2| r3| 3.0|tool|
#| 2| r3| 3.0| app|
#| 1| r2| 2.0|tool|
#| 1| r1| 1.0|tool|
#+---------+----------+--------+----+
Another simple approach, if you know all the keys in advance, is to use regexp_extract to pull the values out of the strings:
from pyspark.sql.functions import col, explode, regexp_extract

df.withColumn("items", explode(col("items"))) \
  .withColumn("name", regexp_extract("items", "name: \"(.+?)\"[,}]", 1)) \
  .withColumn("version", regexp_extract("items", "version: \"(.+?)\"[,}]", 1)) \
  .drop("items") \
  .show()
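If you would rather go through from_json as mentioned above, even with these not-quite-JSON strings, one option (a sketch that assumes only the keys are missing quotes, as in the sample data) is to quote the keys with regexp_replace and then parse:
from pyspark.sql.functions import col, explode, from_json, regexp_replace
from pyspark.sql.types import StructType, StructField, StringType

item_schema = StructType([
    StructField("name", StringType()),
    StructField("version", StringType()),
])

df.withColumn("item", explode(col("items"))) \
  .withColumn("item", regexp_replace("item", r"(\w+):", '"$1":')) \
  .withColumn("item", from_json("item", item_schema)) \
  .select("accountId", "resourceId", "item.name", "item.version") \
  .show()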