pyspark dataframe check if string contains substring

I need help implementing the following Python (pandas) logic in a PySpark DataFrame.

Python: df1['isRT'] = df1['main_string'].str.lower().str.contains('|'.join(df2['sub_string'].str.lower()))

df1.show()

+--------+---------------------------+
|id      |    main_string            |
+--------+---------------------------+
|  1     |    i am a boy             |
|  2     |    i am from london       |
|  3     |    big data hadoop        |
|  4     |    always be happy        |
|  5     |    software and hardware  |
+--------+---------------------------+

df2.show()

+--------+---------------------------+
|id      |    sub_string             |
+--------+---------------------------+
|  1     |        happy              |
|  2     |        xxxx               |
|  3     |    i am a boy             |
|  4     |        yyyy               |
|  5     |    from london            |
+--------+---------------------------+

Expected final output: df1.show()

+--------+---------------------------+--------+
|id      |    main_string            | isRT   |
+--------+---------------------------+--------+
|  1     |    i am a boy             |  True  |
|  2     |    i am from london       |  True  |
|  3     |    big data hadoop        |  False |
|  4     |    always be happy        |  True  |
|  5     |    software and hardware  |  False |
+--------+---------------------------+--------+

First collect the substrings into a single regex alternation pattern substr_list, then use the rlike function to generate isRT:

# Collect all lower-cased substrings from df2 into one list
df3 = df2.select(F.collect_list(F.lower('sub_string')).alias('substr'))
# Join them into a single regex alternation pattern, e.g. 'happy|xxxx|...'
substr_list = '|'.join(df3.first()[0])
# rlike is true when main_string matches any of the alternatives
df = df1.withColumn('isRT', F.expr(f'lower(main_string) rlike "{substr_list}"'))
df.show(truncate=False)

For your two dataframes,

df1 = spark.createDataFrame(['i am a boy', 'i am from london', 'big data hadoop', 'always be happy', 'software and hardware'], 'string').toDF('main_string')
df1.show(truncate=False)
df2 = spark.createDataFrame(['happy', 'xxxx', 'i am a boy', 'yyyy', 'from london'], 'string').toDF('sub_string')
df2.show(truncate=False)

+---------------------+
|main_string          |
+---------------------+
|i am a boy           |
|i am from london     |
|big data hadoop      |
|always be happy      |
|software and hardware|
+---------------------+

+-----------+
|sub_string |
+-----------+
|happy      |
|xxxx       |
|i am a boy |
|yyyy       |
|from london|
+-----------+

you can get the result below with a simple join condition.

from pyspark.sql import functions as f

# Left join on a contains condition (lower-casing both sides, matching the
# original pandas logic); df1 rows with no matching substring get a null
df1.join(df2, f.lower(f.col('main_string')).contains(f.lower(f.col('sub_string'))), 'left') \
   .withColumn('isRT', f.col('sub_string').isNotNull()) \
   .drop('sub_string') \
   .show()

+--------------------+-----+
|         main_string| isRT|
+--------------------+-----+
|          i am a boy| true|
|    i am from london| true|
|     big data hadoop|false|
|     always be happy| true|
|software and hard...|false|
+--------------------+-----+