How do I filter rows based on whether a column value is in a Set of Strings in a Spark DataFrame
Is there a more elegant way to filter rows based on whether a column value is in a Set of Strings?
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.udf

def myFilter(actions: Set[String], myDF: DataFrame): DataFrame = {
  // UDF that checks whether the column value is contained in the given Set
  val containsAction = udf((action: String) => actions.contains(action))
  // the 'action symbol syntax needs import sqlContext.implicits._ in scope
  myDF.filter(containsAction('action))
}
In SQL you can do
select * from myTable where action in ('action1', 'action2', 'action3')
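For reference, one way to run that SQL against the DataFrame in Spark 1.x is to register it as a temporary table first; a minimal sketch (the table name myTable is just an assumption, and sqlContext is your SQLContext):

// register the DataFrame so it can be queried by name
myDF.registerTempTable("myTable")
val filtered = sqlContext.sql(
  "select * from myTable where action in ('action1', 'action2', 'action3')")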
How about this:
myDF.filter("action in (1,2)")
or
import org.apache.spark.sql.functions.lit
myDF.where($"action".in(Seq(1,2).map(lit(_)):_*))
or
import org.apache.spark.sql.functions.lit
myDF.where($"action".in(Seq(lit(1),lit(2)):_*))
Additional support will be added to make this cleaner in 1.5
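Following up on that last remark: from Spark 1.5 onward, Column.isin accepts plain values directly, so the original myFilter can be written without a UDF or lit wrapping. A minimal sketch, assuming the same actions Set of Strings from the question:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Spark 1.5+: isin takes plain values, so no UDF or lit(...) wrapping is needed
def myFilter(actions: Set[String], myDF: DataFrame): DataFrame =
  myDF.filter(col("action").isin(actions.toSeq: _*))

For example, myFilter(Set("action1", "action2", "action3"), myDF) keeps only the rows whose action column matches one of those values.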