
How to order by multiple columns in pyspark

I have a dataframe:

Price   sq.ft   constructed
15000   800     22/12/2019
80000   1200    25/12/2019
90000   1400    15/12/2019
70000   1000    10/11/2019
80000   1300    24/12/2019
15000   950     26/12/2019

I want to sort by multiple columns at once. I do get the correct result, but I'm looking for a better way to do it. Below is my code:

from pyspark.sql import functions as F, Window
from pyspark.sql.functions import col

df.select("*", F.row_number().over(
    Window.partitionBy("Price").orderBy(col("Price").desc(), col("constructed").desc())).alias("Value")).display()
Price   sq.ft   constructed Value
15000   950     26/12/2019  1
15000   800     22/12/2019  2
70000   1000    10/11/2019  1
80000   1200    25/12/2019  1
80000   1300    24/12/2019  2
90000   1400    15/12/2019  1

Instead of repeating col("column name").desc() for every column, is there a better way? I also tried the following:

df.select("*",F.row_number().over(
    Window.partitionBy("Price").orderBy(["Price","constructed"],ascending = False).alias("Rank"))).display()

which raises an error:

TypeError: orderBy() got an unexpected keyword argument 'ascending'

You can use a list comprehension (note that `ascending` is a keyword argument of `DataFrame.orderBy`, not of `Window.orderBy`, which is why your second attempt fails):

from pyspark.sql import functions as F, Window

Window.partitionBy("Price").orderBy(*[F.desc(c) for c in ["Price","constructed"]])