select with window function (dense_rank()) in SparkSQL

I have a table with customer purchase records, and I need to assign each purchase to a specific date-time window. A window is 8 days, so if I buy today and again 5 days later, both purchases get window number 1; but if I buy today and the next purchase is 8 days later, the first purchase falls in window 1 and the second in window 2.

create temporary table transactions
 (client_id int,
  transaction_ts datetime,
  store_id int);

 insert into transactions values 
 (1,'2018-06-01 12:17:37', 1),
 (1,'2018-06-02 13:17:37', 2),
 (1,'2018-06-03 14:17:37', 3),
 (1,'2018-06-09 10:17:37', 2),
 (2,'2018-06-02 10:17:37', 1),
 (2,'2018-06-02 13:17:37', 2),
 (2,'2018-06-08 14:19:37', 3),
 (2,'2018-06-16 13:17:37', 2),
 (2,'2018-06-17 14:17:37', 3)

The window is 8 days. The problem is that I don't understand how to tell dense_rank() OVER (PARTITION BY ...) to look at the timestamp and start a new window after 8 days. As a result I need something like this:

1,'2018-06-01 12:17:37', 1,1
1,'2018-06-02 13:17:37', 2,1
1,'2018-06-03 14:17:37', 3,1
1,'2018-06-09 10:17:37', 2,2
2,'2018-06-02 10:17:37', 1,1
2,'2018-06-02 13:17:37', 2,1
2,'2018-06-08 14:19:37', 3,2
2,'2018-06-16 13:17:37', 2,3
2,'2018-06-17 14:17:37', 3,3

Any idea how to get this? I can run it in MySQL or Spark SQL, but MySQL does not support PARTITION BY. I still can't find a solution, any help is appreciated!

Most likely you can solve this in Spark SQL with a time window plus a partitioned window function:

val purchases = Seq(
  (1, "2018-06-01 12:17:37", 1),
  (1, "2018-06-02 13:17:37", 2),
  (1, "2018-06-03 14:17:37", 3),
  (1, "2018-06-09 10:17:37", 2),
  (2, "2018-06-02 10:17:37", 1),
  (2, "2018-06-02 13:17:37", 2),
  (2, "2018-06-08 14:19:37", 3),
  (2, "2018-06-16 13:17:37", 2),
  (2, "2018-06-17 14:17:37", 3)
).toDF("client_id", "transaction_ts", "store_id")

purchases.show(false)
+---------+-------------------+--------+
|client_id|transaction_ts     |store_id|
+---------+-------------------+--------+
|1        |2018-06-01 12:17:37|1       |
|1        |2018-06-02 13:17:37|2       |
|1        |2018-06-03 14:17:37|3       |
|1        |2018-06-09 10:17:37|2       |
|2        |2018-06-02 10:17:37|1       |
|2        |2018-06-02 13:17:37|2       |
|2        |2018-06-08 14:19:37|3       |
|2        |2018-06-16 13:17:37|2       |
|2        |2018-06-17 14:17:37|3       |
+---------+-------------------+--------+



// group each client's purchases into tumbling 8-day time windows,
// collecting the timestamps and store ids that fall into each window
val groupedByTimeWindow = purchases
  .groupBy($"client_id", window($"transaction_ts", "8 days"))
  .agg(collect_list("transaction_ts").as("transaction_tss"),
       collect_list("store_id").as("store_ids"))

import org.apache.spark.sql.expressions.Window

// number each client's windows in chronological order of their start time
val windowByClient = Window.partitionBy($"client_id").orderBy($"window.start")

val withWindowNumber = groupedByTimeWindow.withColumn("window_number", row_number().over(windowByClient))

withWindowNumber.orderBy("client_id", "window.start").show(false)

+---------+---------------------------------------------+---------------------------------------------------------------+---------+-------------+
|client_id|window                                       |transaction_tss                                                |store_ids|window_number|
+---------+---------------------------------------------+---------------------------------------------------------------+---------+-------------+
|1        |[2018-05-28 17:00:00.0,2018-06-05 17:00:00.0]|[2018-06-01 12:17:37, 2018-06-02 13:17:37, 2018-06-03 14:17:37]|[1, 2, 3]|1            |
|1        |[2018-06-05 17:00:00.0,2018-06-13 17:00:00.0]|[2018-06-09 10:17:37]                                          |[2]      |2            |
|2        |[2018-05-28 17:00:00.0,2018-06-05 17:00:00.0]|[2018-06-02 10:17:37, 2018-06-02 13:17:37]                     |[1, 2]   |1            |
|2        |[2018-06-05 17:00:00.0,2018-06-13 17:00:00.0]|[2018-06-08 14:19:37]                                          |[3]      |2            |
|2        |[2018-06-13 17:00:00.0,2018-06-21 17:00:00.0]|[2018-06-16 13:17:37, 2018-06-17 14:17:37]                     |[2, 3]   |3            |
+---------+---------------------------------------------+---------------------------------------------------------------+---------+-------------+

If you need one row per transaction again, you can explode the elements of store_ids or transaction_tss.
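
For example, a minimal sketch of that flattening step (assuming the withWindowNumber DataFrame from above; posexplode keeps the array position so the matching store_id can be looked up in the parallel store_ids array):

val flattened = withWindowNumber
  .select($"client_id", $"store_ids", $"window_number",
          posexplode($"transaction_tss").as(Seq("pos", "transaction_ts")))
  // pick the store_id at the same position as the exploded timestamp
  .withColumn("store_id", expr("store_ids[pos]"))
  .select("client_id", "transaction_ts", "store_id", "window_number")

flattened.orderBy("client_id", "transaction_ts").show(false)

This should give one row per purchase with its window number, which is the shape of the expected result in the question.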

Hope this helps!

I didn't use the proposed Spark solution; in the end I did it with plain SQL logic and a cursor. It's not very efficient, but I needed to get the job done.
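
The cursor-based query itself isn't shown. For comparison, a cursor-free sketch in plain Spark SQL that should produce the same epoch-aligned 8-day buckets as the window() approach above (the FLOOR(UNIX_TIMESTAMP(...) / (8 * 86400)) bucket expression is an illustration, not the poster's actual query):

purchases.createOrReplaceTempView("transactions")

val viaSql = spark.sql("""
  SELECT client_id, transaction_ts, store_id,
         DENSE_RANK() OVER (
           PARTITION BY client_id
           ORDER BY FLOOR(UNIX_TIMESTAMP(transaction_ts) / (8 * 86400))
         ) AS window_number
  FROM transactions
  ORDER BY client_id, transaction_ts
""")

viaSql.show(false)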