Explode map column in Pyspark without losing null values

Is there an elegant way to explode a map column in Pyspark 2.2 without losing null values? `explode_outer` was only introduced in Pyspark 2.3.

The schema of the affected column is:

 |-- foo: map (nullable = true)
 |    |-- key: string
 |    |-- value: struct (valueContainsNull = true)
 |    |    |-- first: long (nullable = true)
 |    |    |-- last: long (nullable = true)

I want to replace empty maps with some dummy value so that I can explode the whole dataframe without losing null values. I tried something like this, but it raises an error:

from pyspark.sql.functions import when, size, col
df = spark.read.parquet("path").select(
        when(size(col("foo")) == 0, {"key": [0, 0]}).alias("bar")
    )

Error:

Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.when.
: java.lang.RuntimeException: Unsupported literal type class java.util.HashMap {key=[0, 0]}
    at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
    at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create.apply(literals.scala:163)
    at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create.apply(literals.scala:163)
    at scala.util.Try.getOrElse(Try.scala:79)
    at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
    at org.apache.spark.sql.functions$.typedLit(functions.scala:112)
    at org.apache.spark.sql.functions$.lit(functions.scala:95)
    at org.apache.spark.sql.functions$.when(functions.scala:1256)
    at org.apache.spark.sql.functions.when(functions.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

So I finally got it working. The error occurs because `when` passes the Python dict through `lit`, which does not support map literals. Instead, I replaced the empty maps with a dummy value via a udf, then exploded and dropped the original column:

from pyspark.sql.functions import udf, explode
from pyspark.sql.types import MapType, StringType, StructType, StructField, LongType

# Substitute a dummy entry when the map is null or empty, so explode
# keeps those rows instead of dropping them.
replace_empty_map = udf(
    lambda x: {"key": (0, 1)} if not x else x,
    MapType(StringType(),
            StructType([StructField("first", LongType()),
                        StructField("last", LongType())]))
)

df = df.withColumn("foo_replaced", replace_empty_map(df["foo"])).drop("foo")
df = df.select("*", explode("foo_replaced").alias("foo_key", "foo_val")).drop("foo_replaced")
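For reference, the row-level behaviour of this explode-with-dummy trick can be sketched in plain Python (an illustration only, not PySpark; the `DUMMY` entry and row tuples are assumptions for the example). Rows whose map is null or empty receive a single dummy entry so they survive the flattening, which is what `explode_outer` later does natively:

```python
# Dummy entry standing in for a null/empty map, mirroring the udf above.
DUMMY = {"key": (0, 1)}

def explode_keeping_nulls(rows):
    """Flatten (row_id, map) pairs into (row_id, key, value) triples,
    keeping rows whose map is null or empty by substituting DUMMY."""
    out = []
    for row_id, foo in rows:
        mapping = foo if foo else DUMMY  # null/empty map -> dummy entry
        for k, v in mapping.items():
            out.append((row_id, k, v))
    return out
```

With a plain `explode`, rows 2 and 3 below would disappear; with the substitution they are kept with the dummy key and value.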