type HashPartitioner is not a member of org.apache.spark.sql.SparkSession

I'm experimenting with Spark's HashPartitioner in spark-shell. The error is as follows:

scala> val data = sc.parallelize(List((1, 3), (2, 4), (3, 6), (3, 7)))
data: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> val partitionedData = data.partitionBy(new spark.HashPartitioner(2))
<console>:26: error: type HashPartitioner is not a member of org.apache.spark.sql.SparkSession
       val partitionedData = data.partitionBy(new spark.HashPartitioner(2))
                                                        ^

scala> val partitionedData = data.partitionBy(new org.apache.spark.HashPartitioner(2))
partitionedData: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[1] at partitionBy at <console>:26

The second operation fails while the third succeeds. Why does spark-shell look for spark.HashPartitioner as a member of org.apache.spark.sql.SparkSession instead of resolving it in the org.apache.spark package?

In spark-shell, spark is a predefined SparkSession object, not the org.apache.spark package. That binding shadows the package name, so spark.HashPartitioner is looked up as a member of SparkSession, which is exactly what the error message says.
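You can check what the name spark refers to directly in the shell; a minimal check (the res number and exact output may vary by Spark version):

scala> spark.getClass.getName
res0: String = org.apache.spark.sql.SparkSession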

You should import org.apache.spark.HashPartitioner or use the fully qualified class name, for example:

import org.apache.spark.HashPartitioner

val partitionedData = data.partitionBy(new HashPartitioner(2))
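To confirm the repartitioning took effect, you can inspect the resulting RDD's partitioner and partition count; a minimal sketch reusing the same data as above (the object hash in the printed output will vary):

import org.apache.spark.HashPartitioner

val data = sc.parallelize(List((1, 3), (2, 4), (3, 6), (3, 7)))
val partitionedData = data.partitionBy(new HashPartitioner(2))

// Expect: Some(org.apache.spark.HashPartitioner@...) followed by 2
println(partitionedData.partitioner)
println(partitionedData.partitions.length)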