Spark.sql filter rows by MAX
Here is an excerpt of the source file; you can imagine it is much larger:
date,code1,postcode,cityname,total
2020-03-27,2011,X700,Curepipe,44
2020-03-29,2011,X700,Curepipe,44
2020-03-26,2011,X700,Curepipe,22
2020-03-27,2035,X920,vacoas,3
2020-03-25,2011,X920,vacoas,1
2020-03-24,2122,X760,souillac,22
2020-03-23,2122,X760,souillac,11
2020-03-22,2257,X760,souillac,10
2020-03-27,2480,X510,rosehill,21
2020-03-22,2035,X510,rosehill,7
2020-03-20,2035,X510,rosehill,3
After the following code:
# Load data
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("source").getOrCreate()
dfcases = spark.read.format("csv").option("header", "true").load("sourcefile.csv")
dfcases.createOrReplaceTempView("tablecases")
spark.sql(XXXXXXXXXXXXX).show()  # Spark SQL query to insert here
I want to get this result:
Curepipe,X700,2020-03-27,44
Curepipe,X700,2020-03-29,44
souillac,X760,2020-03-24,22
rosehill,X510,2020-03-27,21
vacoas,X920,2020-03-27,3
The goal is to:
- Select, for each cityname, the date(s) on which the total reaches its MAX (note that a city can appear twice if it reaches its MAX total on 2 different dates),
- Sort by total descending, then by date ascending, then by cityname ascending.
Thanks!
The following query produces the output you want. Note that the CSV reader loads total as a string, so it has to be cast to an integer before comparing or sorting:
SELECT cityname, postcode, date, total
FROM tablecases t
WHERE CAST(total AS INT) = (SELECT MAX(CAST(total AS INT))
                            FROM tablecases
                            WHERE cityname = t.cityname)
ORDER BY CAST(total AS INT) DESC, date, cityname
Demo on db<>fiddle
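As a sanity check, the same max-per-city filter can be traced in plain Python on the sample rows (a minimal standalone sketch, no Spark involved; it just mirrors the correlated-subquery condition and the ORDER BY):

```python
import csv
import io

# The sample rows from the question, parsed with the stdlib csv module.
SAMPLE = """date,code1,postcode,cityname,total
2020-03-27,2011,X700,Curepipe,44
2020-03-29,2011,X700,Curepipe,44
2020-03-26,2011,X700,Curepipe,22
2020-03-27,2035,X920,vacoas,3
2020-03-25,2011,X920,vacoas,1
2020-03-24,2122,X760,souillac,22
2020-03-23,2122,X760,souillac,11
2020-03-22,2257,X760,souillac,10
2020-03-27,2480,X510,rosehill,21
2020-03-22,2035,X510,rosehill,7
2020-03-20,2035,X510,rosehill,3"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))

# Keep each row whose total equals its city's maximum total
# (the correlated-subquery condition), then apply the ORDER BY.
result = sorted(
    (r for r in rows
     if int(r["total"]) == max(int(s["total"]) for s in rows
                               if s["cityname"] == r["cityname"])),
    key=lambda r: (-int(r["total"]), r["date"], r["cityname"]),
)

for r in result:
    print(r["cityname"], r["postcode"], r["date"], r["total"])
# → Curepipe X700 2020-03-27 44
#   Curepipe X700 2020-03-29 44
#   souillac X760 2020-03-24 22
#   rosehill X510 2020-03-27 21
#   vacoas X920 2020-03-27 3
```

This reproduces the five desired rows, including Curepipe appearing twice because it hits its MAX total on two dates.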
You can use an SQL window function in your query to get the result, like this (total must be cast to an integer, since the CSV reader loads it as a string and lexicographic comparison would rank '7' above '21'):
SELECT
  cityname,
  postcode,
  date,
  total
FROM
  (SELECT
     cityname,
     postcode,
     date,
     total,
     MAX(CAST(total AS INT)) OVER (PARTITION BY cityname) AS max_total
   FROM tablecases) t
WHERE max_total = CAST(total AS INT)
ORDER BY max_total DESC, date, cityname
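The window step can likewise be emulated in plain Python: first compute MAX(total) OVER (PARTITION BY cityname) as a per-city lookup, then keep the rows whose total matches it. A minimal sketch on the sample data, with totals already parsed as integers:

```python
# Sample rows as (date, postcode, cityname, total) tuples,
# with total already an int (in Spark this is the CAST step).
rows = [
    ("2020-03-27", "X700", "Curepipe", 44),
    ("2020-03-29", "X700", "Curepipe", 44),
    ("2020-03-26", "X700", "Curepipe", 22),
    ("2020-03-27", "X920", "vacoas", 3),
    ("2020-03-25", "X920", "vacoas", 1),
    ("2020-03-24", "X760", "souillac", 22),
    ("2020-03-23", "X760", "souillac", 11),
    ("2020-03-22", "X760", "souillac", 10),
    ("2020-03-27", "X510", "rosehill", 21),
    ("2020-03-22", "X510", "rosehill", 7),
    ("2020-03-20", "X510", "rosehill", 3),
]

# Step 1: MAX(total) OVER (PARTITION BY cityname),
# materialized as a cityname -> max_total dictionary.
max_total = {}
for date, postcode, city, total in rows:
    max_total[city] = max(max_total.get(city, 0), total)

# Step 2: WHERE max_total = total, then the ORDER BY clause.
result = sorted(
    (r for r in rows if r[3] == max_total[r[2]]),
    key=lambda r: (-r[3], r[0], r[2]),  # total DESC, date ASC, cityname ASC
)

for date, postcode, city, total in result:
    print(f"{city},{postcode},{date},{total}")
```

Without the cast, the window MAX in Spark would compare strings, so rosehill's max would come out as "7" instead of 21.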