Rolling Up Customer Data into One Row
I created a query in Apache Spark that takes multiple rows of customer data, and I want to roll them up into a single row showing which product types they have opened. So the data looks like this:
Customer Product
1 Savings
1 Checking
1 Auto
And I want it to end up looking like this:
Customer Product
1 Savings/Checking/Auto
The query currently still returns multiple rows. I tried GROUP BY, but instead of showing the multiple products a customer has, it only shows one.
Is there a way to do this in Apache Spark or SQL (which is very similar)? Unfortunately I don't have MySQL, and I don't think IT will install it for me.
SELECT
  "ACCOUNT"."account_customerkey" AS "account_customerkey",
  max(
    concat(CASE WHEN Savings = 1 THEN ' Savings' END,
           CASE WHEN Checking = 1 THEN ' Checking ' END,
           CASE WHEN CD = 1 THEN ' CD /' END,
           CASE WHEN IRA = 1 THEN ' IRA /' END,
           CASE WHEN StandardLoan = 1 THEN ' SL /' END,
           CASE WHEN Auto = 1 THEN ' Auto /' END,
           CASE WHEN Mortgage = 1 THEN ' Mortgage /' END,
           CASE WHEN CreditCard = 1 THEN ' CreditCard ' END)) AS Description
FROM "ACCOUNT" "ACCOUNT"
INNER JOIN (
  SELECT
    "ACCOUNT"."account_customerkey" AS "customerkey",
    CASE WHEN "ACCOUNT"."account_producttype" = 'Savings' THEN 1 ELSE NULL END AS Savings,
    CASE WHEN "ACCOUNT"."account_producttype" = 'Checking' THEN 1 ELSE NULL END AS Checking,
    CASE WHEN "ACCOUNT"."account_producttype" = 'CD' THEN 1 ELSE NULL END AS CD,
    CASE WHEN "ACCOUNT"."account_producttype" = 'IRA' THEN 1 ELSE NULL END AS IRA,
    CASE WHEN "ACCOUNT"."account_producttype" = 'Standard Loan' THEN 1 ELSE NULL END AS StandardLoan,
    CASE WHEN "ACCOUNT"."account_producttype" = 'Auto' THEN 1 ELSE NULL END AS Auto,
    CASE WHEN "ACCOUNT"."account_producttype" = 'Mortgage' THEN 1 ELSE NULL END AS Mortgage,
    CASE WHEN "ACCOUNT"."account_producttype" = 'Credit Card' THEN 1 ELSE NULL END AS CreditCard
  FROM "ACCOUNT" "ACCOUNT"
) a ON "account_customerkey" = a."customerkey"
GROUP BY
  "ACCOUNT"."account_customerkey"
Please try this.
scala> df.show()
+--------+--------+
|Customer| Product|
+--------+--------+
| 1| Savings|
| 1|Checking|
| 1| Auto|
| 2| Savings|
| 2| Auto|
| 3|Checking|
+--------+--------+
scala> df.groupBy($"Customer").agg(collect_list($"Product").as("Product")).select($"Customer",concat_ws(",",$"Product").as("Product")).show(false)
+--------+---------------------+
|Customer|Product |
+--------+---------------------+
|1 |Savings,Checking,Auto|
|3 |Checking |
|2 |Savings,Auto |
+--------+---------------------+
scala>
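To see what the chain above is doing without a Spark cluster, here is a plain-Scala sketch of the same rollup using standard collections: `groupBy` plays the role of `groupBy($"Customer")`, mapping over each group plays the role of `collect_list`, and `mkString` plays the role of `concat_ws` (with `/` as the separator, matching the output the question asked for).

```scala
// Plain-Scala sketch of the Spark rollup above, no Spark required.
// Same (Customer, Product) pairs as in df.show().
val rows = Seq(
  (1, "Savings"), (1, "Checking"), (1, "Auto"),
  (2, "Savings"), (2, "Auto"),
  (3, "Checking")
)

val rolledUp: Map[Int, String] = rows
  .groupBy(_._1)                    // one group per customer, like groupBy($"Customer")
  .map { case (customer, pairs) =>  // pairs.map(_._2) is the collect_list step
    customer -> pairs.map(_._2).mkString("/")  // mkString is the concat_ws step
  }

// rolledUp(1) == "Savings/Checking/Auto"
```

Note that, unlike this in-memory sketch, `collect_list` in Spark gives no ordering guarantee across partitions, and it keeps duplicates; `collect_set` drops them if that matters for your data.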
See https://docs.microsoft.com/en-us/azure/databricks/sql/language-manual/functions/collect_list and the related functions.
You need to use collect_list, which is available in SQL or %sql.
%sql
select id, collect_list(num)
from t1
group by id
I used my own data, so you'll need to adapt it; this just demonstrates the same thing in more native SQL.
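Applied to the schema in the question, the same idea replaces the whole hand-written pivot in one aggregate. This is a sketch assuming the table and column names from the question (`ACCOUNT`, `account_customerkey`, `account_producttype`), with `/` as the separator to match the desired output:

```sql
-- concat_ws joins the collected products with '/'; swap in collect_set
-- if the same product type can appear twice per customer.
SELECT
  account_customerkey,
  concat_ws('/', collect_list(account_producttype)) AS Description
FROM ACCOUNT
GROUP BY account_customerkey
```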