Array Intersection in Spark SQL
I have a table with an array-typed column named writer, whose values look like array[value1, value2], array[value2, value3], and so on.
I am doing a self join to get the rows whose arrays have values in common. I tried:
sqlContext.sql("SELECT R2.writer FROM table R1 JOIN table R2 ON R1.id != R2.id WHERE ARRAY_INTERSECTION(R1.writer, R2.writer)[0] is not null ")
and
sqlContext.sql("SELECT R2.writer FROM table R1 JOIN table R2 ON R1.id != R2.id WHERE ARRAY_INTERSECT(R1.writer, R2.writer)[0] is not null ")
but both raise the same exception:
Exception in thread "main" org.apache.spark.sql.AnalysisException:
Undefined function: 'ARRAY_INTERSECT'. This function is neither a
registered temporary function nor a permanent function registered in
the database 'default'.; line 1 pos 80
It seems Spark SQL supports neither ARRAY_INTERSECTION nor ARRAY_INTERSECT. How can I achieve my goal in Spark SQL?
You need a UDF:
import org.apache.spark.sql.functions.udf
spark.udf.register("array_intersect",
(xs: Seq[String], ys: Seq[String]) => xs.intersect(ys))
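The registered UDF simply delegates to Scala's Seq.intersect, so its semantics carry over: a multiset intersection that preserves the order of the left operand. A plain-Scala illustration (no Spark required):

```scala
object IntersectDemo {
  def main(args: Array[String]): Unit = {
    // Seq.intersect keeps the ordering of the left operand and
    // respects element multiplicity (multiset intersection).
    val a = Seq("x", "y", "y", "z")
    val b = Seq("y", "z", "y", "w")
    println(a.intersect(b))                        // List(y, y, z)
    println(Seq("a").intersect(Seq("b")).isEmpty)  // true
  }
}
```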
and then check whether the intersection is empty:
scala> spark.sql("SELECT size(array_intersect(array('1', '2'), array('3', '4'))) = 0").show
+-----------------------------------------+
|(size(UDF(array(1, 2), array(3, 4))) = 0)|
+-----------------------------------------+
| true|
+-----------------------------------------+
scala> spark.sql("SELECT size(array_intersect(array('1', '2'), array('1', '4'))) = 0").show
+-----------------------------------------+
|(size(UDF(array(1, 2), array(1, 4))) = 0)|
+-----------------------------------------+
| false|
+-----------------------------------------+
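With the UDF registered, the original self-join can be expressed by testing the size of the intersection. A sketch, reusing the question's table name and id/writer columns, and assuming a SparkSession `spark` with that temporary view:

```scala
// Sketch: assumes a SparkSession `spark` and a registered temp view
// `table` with columns `id` and `writer`, as in the question.
spark.sql("""
  SELECT R2.writer
  FROM table R1 JOIN table R2 ON R1.id != R2.id
  WHERE size(array_intersect(R1.writer, R2.writer)) > 0
""").show
```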
Since Spark 2.4, the array_intersect function is available directly in SQL:
spark.sql(
"SELECT array_intersect(array(1, 42), array(42, 3)) AS intersection"
).show
+------------+
|intersection|
+------------+
| [42]|
+------------+
and in the Dataset API:
import org.apache.spark.sql.functions.array_intersect
import spark.implicits._  // for toDF and the $ column syntax

Seq((Seq(1, 42), Seq(42, 3)))
  .toDF("a", "b")
  .select(array_intersect($"a", $"b") as "intersection")
  .show
+------------+
|intersection|
+------------+
| [42]|
+------------+
Equivalent functions exist in the guest languages:
pyspark.sql.functions.array_intersect in PySpark.
SparkR::array_intersect in SparkR.
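So on Spark 2.4+ the question's query needs no UDF at all. Besides size(array_intersect(...)) > 0, Spark 2.4 also added the arrays_overlap predicate, which expresses the condition directly. A sketch, reusing the question's table and column names:

```scala
// Sketch: assumes a SparkSession `spark` and a temp view `table`
// with `id` and `writer` columns, as in the question.
spark.sql("""
  SELECT R2.writer
  FROM table R1 JOIN table R2 ON R1.id != R2.id
  WHERE arrays_overlap(R1.writer, R2.writer)
""").show
```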