pyspark filtering rows by corresponding condition

Suppose I have two tables:

df_1:
| condition | date           |
| --------  | -------------- |
| A         | 2018-01-01     |
| A         | 2018-01-02     |
| A         | 2018-01-03     |
| B         | 2018-04-04     |
| B         | 2018-04-05     |
| B         | 2018-04-06     |

df_2: 
| condition | date           |
| --------  | -------------- |
| A         | 2018-01-01     |
| B         | 2018-04-05     |

I want to filter Table 1 by the dates in Table 2, so that I only keep the entries of df_1 whose date is greater than the corresponding date in df_2. This is the expected output:

| condition | date           |
| --------  | -------------- |
| A         | 2018-01-02     |
| A         | 2018-01-03     |
| B         | 2018-04-06     |

One way to do this in pandas is to iterate over the rows of df_2:
import pandas as pd

# For each condition in df_2, keep the rows of df_1 with a later date,
# then stack the filtered pieces back together.
all_dfs = []
for idx, row in df_2.iterrows():
    filtered_df = df_1[(df_1['condition'] == row['condition']) & (df_1['date'] > row['date'])]
    all_dfs.append(filtered_df)
final_df = pd.concat(all_dfs, axis=0)

How can I do this in pyspark without a for loop?

Spark has a left_semi join for exactly this use case.

Example

from datetime import datetime

from pyspark.sql import Row, SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DateType

spark = SparkSession.builder.getOrCreate()  # reuse the active session or create one

schema = StructType([StructField('condition', StringType()), StructField('date', DateType())])

df_1_rows = [Row("A", datetime.strptime("2018-01-01", "%Y-%m-%d")),
             Row("A", datetime.strptime("2018-01-02", "%Y-%m-%d")),
             Row("A", datetime.strptime("2018-01-03", "%Y-%m-%d")),
             Row("B", datetime.strptime("2018-04-04", "%Y-%m-%d")),
             Row("B", datetime.strptime("2018-04-05", "%Y-%m-%d")),
             Row("B", datetime.strptime("2018-04-06", "%Y-%m-%d"))]
df_1 = spark.createDataFrame(df_1_rows, schema)

df_2_rows = [Row("A", datetime.strptime("2018-01-01", "%Y-%m-%d")),
             Row("B", datetime.strptime("2018-04-05", "%Y-%m-%d"))]
df_2 = spark.createDataFrame(df_2_rows, schema)

# left_semi: keep the rows of df_1 that have a match in df_2 with an earlier date.
df_1.join(df_2,
          (df_1['condition'] == df_2['condition']) & (df_1['date'] > df_2['date']),
          "left_semi").show()

Output

+---------+----------+
|condition|      date|
+---------+----------+
|        A|2018-01-02|
|        A|2018-01-03|
|        B|2018-04-06|
+---------+----------+
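
A left semi join returns only the columns of the left DataFrame and emits each matching row of df_1 at most once, so no follow-up drop or deduplication step is needed. If the shared column names make the join condition hard to read, the same query can be written against aliased DataFrames; the sketch below is just an equivalent formulation of the join above, assuming the same df_1 and df_2:

from pyspark.sql.functions import col

# Alias both sides so the shared column names can be referenced unambiguously.
result = df_1.alias("a").join(
    df_2.alias("b"),
    (col("a.condition") == col("b.condition")) & (col("a.date") > col("b.date")),
    "left_semi",
)
result.show()  # same output as above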