Find list of all columns whose value fall between specific columns in PySpark Dataframe

I have a Spark DataFrame consisting of 20 columns, and I want to find out which columns have values that fall between the values of the High and Low columns.

Time,8,7,6,5,4,3,2,1,0,-1,-2,-3,-4,-5,-6,-7,-8,High,Low
09:16,930.9476296,927.4296671,924.1894385,923.2636589,921.6898335,920.578898,919.4679625,918.171871,915.95,913.728129,912.4320375,911.321102,910.2101665,908.6363411,907.7105615,904.4703329,900.9523704,919.95,917.65
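For reference, a minimal sketch that rebuilds this single-row sample as a DataFrame, so the answer below can be run end to end (the column names and values are taken from the header and row above; the SparkSession setup is an assumption, not part of the original question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Column names from the header row; the numeric columns are plain floats.
cols = ["Time", "8", "7", "6", "5", "4", "3", "2", "1", "0",
        "-1", "-2", "-3", "-4", "-5", "-6", "-7", "-8", "High", "Low"]
row = ["09:16", 930.9476296, 927.4296671, 924.1894385, 923.2636589,
       921.6898335, 920.578898, 919.4679625, 918.171871, 915.95,
       913.728129, 912.4320375, 911.321102, 910.2101665, 908.6363411,
       907.7105615, 904.4703329, 900.9523704, 919.95, 917.65]
joineddata = spark.createDataFrame([row], cols)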

I tried the following command, but I get an error:

joineddata.withColumn('RR', map(lambda x: [x], ((F.col(x) >= (F.col('Low')) & (F.col(x) <= (F.col('High')) for x in joineddata.columns[1:18]))))).show()

Error:

TypeError: Column is not iterable

Desired result:

I want a new column that is a list of the names of the columns whose values lie between the High and Low columns.

Time,8,7,6,5,4,3,2,1,0,-1,-2,-3,-4,-5,-6,-7,-8,High,Low,RR
09:16,930.9476296,927.4296671,924.1894385,923.2636589,921.6898335,920.578898,919.4679625,918.171871,915.95,913.728129,912.4320375,911.321102,910.2101665,908.6363411,907.7105615,904.4703329,900.9523704,919.95,917.65,[2,1]

Just use when and between to check whether each column satisfies the condition and collect the matching column names into an array, then filter the resulting array to remove the nulls (the columns that do not satisfy the condition):

from pyspark.sql.functions import array, col, expr, lit, when

df = joineddata.withColumn('RR', array(*[when(col(c).between(col('Low'), col('High')), lit(c)) for c in joineddata.columns[1:18]])) \
               .withColumn('RR', expr("filter(RR, x -> x is not null)"))

df.select("Time", "RR").show()

#+-----+------+
#| Time|    RR|
#+-----+------+
#|09:16|[2, 1]|
#+-----+------+

Note that the second step uses the filter function, which is only available in Spark 2.4+. For older versions, you can use a UDF instead, as sketched below.
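For Spark versions before 2.4, a minimal sketch of that UDF fallback could look like this (drop_nulls is an illustrative name, not from the original answer):

from pyspark.sql.functions import array, col, lit, udf, when
from pyspark.sql.types import ArrayType, StringType

# Remove the nulls from the collected array; this stands in for the filter() higher-order function.
drop_nulls = udf(lambda arr: [x for x in arr if x is not None], ArrayType(StringType()))

df = joineddata.withColumn('RR', array(*[when(col(c).between(col('Low'), col('High')), lit(c)) for c in joineddata.columns[1:18]])) \
               .withColumn('RR', drop_nulls(col('RR')))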