
PySpark: PartitionBy leaves the same value in column by which partitioned multiple times

I need partitionBy so that I get a distinct value per combination of time and match_instatid, but it only produces distinct values about half the time.

from pyspark.sql import Window
from pyspark.sql import functions as F
from pyspark.sql.functions import col

window_match_time_priority = Window.partitionBy(col("match_instatid"), col("time")) \
    .orderBy(col("match_instatid"), col("time"), priority_udf(col("type")).desc())

with_owner = match.select('match_instatid', "time", "type",
                          F.last(col("team_instatid")).over(window_match_time_priority).alias('last_team'),
                          F.last(col("type")).over(window_match_time_priority).alias('last_action')) \
                  .withColumn("owner", owner_assignment_udf(col("last_team"), col("last_action")))

You can see that the last_action column repeats only for some of the rows that share the same time, but it should repeat for all of them. There should be a single value of owner and last_action for each unique time value.

Try this as the window. For F.last to work as expected, the window frame must be unbounded; F.first works without an unbounded frame.

window_match_time_priority = Window.partitionBy(col("match_instatid"), col("time")) \
    .orderBy(col("match_instatid"), col("time"), priority_udf(col("type")).desc()) \
    .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
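
To see why the frame matters, here is a minimal, self-contained sketch on a toy DataFrame (the sample rows and the plain priority column are made up for illustration; the original code derives the ordering from priority_udf(col("type")) instead). With the default frame an ordered window gets (unbounded preceding to current row), F.last effectively returns the value at the current row, so it changes from row to row; with an explicitly unbounded frame, every row in the partition sees the same value.

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data: two events share the same match/time, one stands alone.
df = spark.createDataFrame(
    [(1, "10:00", "pass", 2),
     (1, "10:00", "shot", 1),
     (1, "10:01", "goal", 1)],
    ["match_instatid", "time", "type", "priority"],
)

# Ordered window with the default frame (unbounded preceding -> current row).
w_default = Window.partitionBy("match_instatid", "time").orderBy(col("priority").desc())

# Same window, but with an explicitly unbounded frame, as in the answer above.
w_unbounded = w_default.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

df.select(
    "match_instatid", "time", "type",
    F.last("type").over(w_default).alias("last_default"),      # changes row by row
    F.last("type").over(w_unbounded).alias("last_unbounded"),  # one value per partition
).show()

On this toy data, last_default shows "pass" and "shot" on the two 10:00 rows, while last_unbounded shows "shot" on both, which matches the behaviour described in the question and the fix in the answer.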