Date difference between consecutive rows - Pyspark Dataframe
I have a table with the following structure:
USER_ID Tweet_ID Date
1 1001 Thu Aug 05 19:11:39 +0000 2010
1 6022 Mon Aug 09 17:51:19 +0000 2010
1 1041 Sun Aug 19 11:10:09 +0000 2010
2 9483 Mon Jan 11 10:51:23 +0000 2012
2 4532 Fri May 21 11:11:11 +0000 2012
3 4374 Sat Jul 10 03:21:23 +0000 2013
3 4334 Sun Jul 11 04:53:13 +0000 2013
Basically, what I would like to do is have a PySpark SQL query that computes the date difference (in seconds) between consecutive records with the same user_id. The expected result would be:
1 Sun Aug 19 11:10:09 +0000 2010 - Mon Aug 09 17:51:19 +0000 2010 839930
1 Mon Aug 09 17:51:19 +0000 2010 - Thu Aug 05 19:11:39 +0000 2010 340780
2 Fri May 21 11:11:11 +0000 2012 - Mon Jan 11 10:51:23 +0000 2012 1813212
3 Sun Jul 11 04:53:13 +0000 2013 - Sat Jul 10 03:21:23 +0000 2013 5510
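For reference, a minimal sketch of how the sample data could be loaded and the Date string parsed into a proper timestamp; the column names, the "EEE MMM dd HH:mm:ss Z yyyy" pattern, and the legacy-parser setting are assumptions based on the sample above (Spark 3+ may otherwise reject this Twitter-style pattern):

from pyspark.sql import SparkSession
from pyspark.sql.functions import unix_timestamp

spark = SparkSession.builder.getOrCreate()
# assumption: needed on Spark 3+ so the Twitter-style pattern below parses
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")

rows = [(1, 1001, "Thu Aug 05 19:11:39 +0000 2010"),
        (1, 6022, "Mon Aug 09 17:51:19 +0000 2010"),
        (1, 1041, "Sun Aug 19 11:10:09 +0000 2010")]
df = spark.createDataFrame(rows, ["user_id", "tweet_id", "date"])

# parse the string into a TimestampType column so the casts below work
df = df.withColumn("date", unix_timestamp("date", "EEE MMM dd HH:mm:ss Z yyyy").cast("timestamp"))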
Like this:
# date must already be a TimestampType column: casting a timestamp to bigint
# yields Unix epoch seconds, so the subtraction is a difference in seconds
df.registerTempTable("df")
sqlContext.sql("""
    SELECT *, CAST(date AS bigint) - CAST(lag(date, 1) OVER (
        PARTITION BY user_id ORDER BY date) AS bigint)
    FROM df""")
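The same query with an explicit alias for the computed column, as a sketch (the name time_intertweet is an assumption, borrowed from the DataFrame examples below):

sqlContext.sql("""
    SELECT *, CAST(date AS bigint) - CAST(lag(date, 1) OVER (
        PARTITION BY user_id ORDER BY date) AS bigint) AS time_intertweet
    FROM df""").show(truncate=False)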
Another way could be:
from pyspark.sql.functions import lag
from pyspark.sql.window import Window

# same idea with the DataFrame API: previous tweet time per user, difference in seconds
df.withColumn(
    "time_intertweet",
    (df.date.cast("bigint") - lag(df.date.cast("bigint"), 1)
        .over(Window.partitionBy("user_id").orderBy("date")))
    .cast("bigint"))
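As a usage sketch (assuming df already carries a TimestampType date column, e.g. as in the setup above); the first tweet of each user has no previous row, so lag returns null and time_intertweet is null there:

w = Window.partitionBy("user_id").orderBy("date")
result = df.withColumn(
    "time_intertweet",
    df.date.cast("bigint") - lag(df.date.cast("bigint"), 1).over(w))
result.orderBy("user_id", "date").show(truncate=False)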
EDITED thanks to @cool_kid
@Joesemy's answer is really good, but it didn't work for me because cast("bigint") threw an error. So I used the datediff function from the pyspark.sql.functions module this way, and it worked:
from pyspark.sql.functions import *
from pyspark.sql.window import Window

df.withColumn(
    "time_intertweet",
    datediff(df.date, lag(df.date, 1)
        .over(Window.partitionBy("user_id").orderBy("date"))))
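Note that datediff returns the difference in whole days; if the gap is needed in seconds, as the question asks, one possible sketch parses the raw string with unix_timestamp instead (assuming date is still the Twitter-style string, and with the pattern again being an assumption):

from pyspark.sql.functions import unix_timestamp, lag
from pyspark.sql.window import Window

# epoch seconds parsed straight from the string column (pattern is an assumption)
secs = unix_timestamp(df.date, "EEE MMM dd HH:mm:ss Z yyyy")
df.withColumn(
    "time_intertweet",
    secs - lag(secs, 1).over(Window.partitionBy("user_id").orderBy(secs)))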