PySpark: change the value of a column before using groupby on that column

I have this JSON data, and I want to aggregate the 'timestamp' column by hour while summing up the values in the 'a' and 'b' columns.

{"a":1 , "b":1, "timestamp":"2017-01-26T01:14:55.719214Z"}
{"a":1 , "b":1,"timestamp":"2017-01-26T01:14:55.719214Z"}
{"a":1 , "b":1,"timestamp":"2017-01-26T02:14:55.719214Z"}
{"a":1 , "b":1,"timestamp":"2017-01-26T03:14:55.719214Z"}

This is the final output I want:

{"a":2 , "b":2, "timestamp":"2017-01-26T01:00:00"}
{"a":1 , "b":1,"timestamp":"2017-01-26T02:00:00"}
{"a":1 , "b":1,"timestamp":"2017-01-26T03:00:00"}

Here is what I have written so far:

from pyspark.sql import functions as f

df = spark.read.json(inputfile)
df2 = df.groupby("timestamp").agg(f.sum(df["a"]), f.sum(df["b"]))

But how should I change the value of the 'timestamp' column before calling groupby? Thanks in advance!

I suppose this is one way to do it:

df2 = df.withColumn("r_timestamp",df["r_timestamp"].substr(0,12)).groupby("timestamp").agg(f.sum(df["a"],f.sum(df["b"])

Is there a better solution to get the timestamp in the desired format?
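
One variation on the same idea, as a sketch rather than a definitive answer (the hour column name is just illustrative, and it assumes Spark can cast the ISO-8601 string to a timestamp; if not, parse it explicitly first, e.g. with to_timestamp or to_utc_timestamp as in the answer below):

from pyspark.sql import functions as f

# Format the timestamp down to the hour instead of slicing the string,
# then group on the formatted value.
df2 = (df
       .withColumn("hour", f.date_format(df["timestamp"], "yyyy-MM-dd'T'HH:00:00"))
       .groupby("hour")
       .agg(f.sum("a").alias("a"), f.sum("b").alias("b")))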

from pyspark.sql import functions as f   

df = spark.read.load(path='file:///home/zht/PycharmProjects/test/disk_file', format='json')
# Convert the string 'timestamp' column to a timestamp type (here via to_utc_timestamp with 'EST')
df = df.withColumn('ts', f.to_utc_timestamp(df['timestamp'], 'EST'))
# Bucket rows into tumbling one-hour windows on the parsed timestamp
win = f.window(df['ts'], windowDuration='1 hour')
df = df.groupBy(win).agg(f.sum(df['a']).alias('sumA'), f.sum(df['b']).alias('sumB'))
# The generated 'window' column is a struct; its 'start' field labels each hour bucket
res = df.select(df['window']['start'].alias('start_time'), df['sumA'], df['sumB'])
res.show(truncate=False)

# output:
+---------------------+----+----+                                               
|start_time           |sumA|sumB|
+---------------------+----+----+
|2017-01-26 15:00:00.0|1   |1   |
|2017-01-26 16:00:00.0|1   |1   |
|2017-01-26 14:00:00.0|2   |2   |
+---------------------+----+----+

f.window is more flexible than truncating the string.
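
For example (a sketch; the out and sliding_win names are just illustrative), the window start can be reformatted into the exact string shape asked for in the question, and the same function also supports sliding windows:

# Reformat the window start into the "yyyy-MM-ddTHH:00:00" shape from the question
out = res.select(f.date_format(res['start_time'], "yyyy-MM-dd'T'HH:mm:ss").alias('timestamp'),
                 res['sumA'].alias('a'),
                 res['sumB'].alias('b'))

# Sliding windows: one-hour windows that start every 30 minutes
# (applied to a DataFrame that still has the parsed 'ts' column)
sliding_win = f.window(f.col('ts'), windowDuration='1 hour', slideDuration='30 minutes')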