Window function not able to capture all rows, skipping the row with value 'null'

My window function is not capturing the first row, where "Rate" is null. Since there is only one country code, it should all be one window, right? The expected output in the "New" column for every row is 20.519.

from pyspark.sql.functions import first
from pyspark.sql.window import Window
W = Window.partitionBy(DF.Country).orderBy(DF.Date.desc())
DF.select("*", first("Rate", ignorenulls=True).over(W).alias("New")).show()

This happens because, when a window has an ORDER BY but no explicit frame, Spark implicitly assumes the frame RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW (see Spark Gotchas). With Date sorted in descending order, the row for Date 202201 comes first, so its frame contains only itself: there is no earlier row with a non-null Rate, and New comes out null. Explicitly defining the frame solves the problem.

data = [("202201", "MXN", None,),
        ("202112", "MXN", 20.519,),
        ("202111", "MXN", 21.364,),
        ("202111", "MXN", 21.364,), ]

DF = spark.createDataFrame(data, "Date:string,Country:string,Rate:Double")

from pyspark.sql.functions import first
from pyspark.sql.window import Window

# Explicit frame: every row in the partition is visible,
# not just the rows from the start of the partition up to the current row.
W = (Window.partitionBy(DF.Country)
           .orderBy(DF.Date.desc())
           .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing))
DF.select("*", first("Rate", ignorenulls=True).over(W).alias("New")).show()

"""
+------+-------+------+------+
|  Date|Country|  Rate|   New|
+------+-------+------+------+
|202201|    MXN|  null|20.519|
|202112|    MXN|20.519|20.519|
|202111|    MXN|21.364|20.519|
|202111|    MXN|21.364|20.519|
+------+-------+------+------+
"""