Explode array values using PySpark

I am new to PySpark and I need to explode my array of values so that each value is assigned to a new column. I tried using explode, but I could not get the desired output. Below is my current output:

+---------------+----------+------------------+----------+---------+------------+--------------------+
|account_balance|account_id|credit_Card_Number|first_name|last_name|phone_number|        transactions|
+---------------+----------+------------------+----------+---------+------------+--------------------+
|         100000|     12345|             12345|       abc|      xyz|  1234567890|[1000, 01/06/2020...|
|         100000|     12345|             12345|       abc|      xyz|  1234567890|[1100, 02/06/2020...|
|         100000|     12345|             12345|       abc|      xyz|  1234567890|[6146, 02/06/2020...|
|         100000|     12345|             12345|       abc|      xyz|  1234567890|[253, 03/06/2020,...|
|         100000|     12345|             12345|       abc|      xyz|  1234567890|[4521, 04/06/2020...|
|         100000|     12345|             12345|       abc|      xyz|  1234567890|[955, 05/06/2020,...|
+---------------+----------+------------------+----------+---------+------------+--------------------+

Below is the schema of the DataFrame:

root
 |-- account_balance: long (nullable = true)
 |-- account_id: long (nullable = true)
 |-- credit_Card_Number: long (nullable = true)
 |-- first_name: string (nullable = true)
 |-- last_name: string (nullable = true)
 |-- phone_number: long (nullable = true)
 |-- transactions: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- amount: long (nullable = true)
 |    |    |-- date: string (nullable = true)
 |    |    |-- shop: string (nullable = true)
 |    |    |-- transaction_code: string (nullable = true)

I want an output with additional amount, date, shop, and transaction_code columns holding their respective values:

amount date        shop     transaction_code
1000   01/06/2020  amazon      buy
1100   02/06/2020  amazon      sell
6146   02/06/2020  ebay        buy
253    03/06/2020  ebay        buy
4521   04/06/2020  amazon      buy
955    05/06/2020  amazon      buy

Use explode, then expand the struct fields, and finally drop the newly exploded helper column and the transactions array column.

Example:

from pyspark.sql.functions import *

#schema shown for only some of the JSON columns, for brevity
df.printSchema()
#root
# |-- account_balance: long (nullable = true)
# |-- transactions: array (nullable = true)
# |    |-- element: struct (containsNull = true)
# |    |    |-- amount: long (nullable = true)
# |    |    |-- date: string (nullable = true)
#explode() adds a struct column named col; "col.*" expands its fields,
#then the helper col and the original transactions array are dropped
df.selectExpr("*", "explode(transactions)") \
  .select("*", "col.*") \
  .drop("col", "transactions") \
  .show()
#+---------------+------+--------+
#|account_balance|amount|    date|
#+---------------+------+--------+
#|             10|  1000|20200202|
#+---------------+------+--------+
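
Applied to the full schema in the question, the same approach produces just the four requested columns. A minimal sketch, assuming df is the DataFrame shown above:

from pyspark.sql.functions import explode

#explode the array of structs into one row per transaction,
#then expand the struct and keep only the requested fields
df.select(explode("transactions").alias("t")) \
  .select("t.amount", "t.date", "t.shop", "t.transaction_code") \
  .show()

Alternatively, Spark SQL's inline function (usable via selectExpr) explodes an array of structs and expands its fields in a single step:

df.selectExpr("inline(transactions)").show()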