How to create a Dataset of Maps?
I'm using Spark 2.2 and I'm running into trouble trying to call spark.createDataset on a Seq of Map.
Code and output from my Spark shell session follow:
// createDataSet on Seq[T] where T = Int works
scala> spark.createDataset(Seq(1, 2, 3)).collect
res0: Array[Int] = Array(1, 2, 3)
scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:24: error: Unable to find encoder for type stored in a Dataset.
Primitive types (Int, String, etc) and Product types (case classes) are
supported by importing spark.implicits._
Support for serializing other types will be added in future releases.
spark.createDataset(Seq(Map(1 -> 2))).collect
^
// createDataSet on a custom case class containing Map works
scala> case class MapHolder(m: Map[Int, Int])
defined class MapHolder
scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect
res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2)))
I have tried import spark.implicits._, but I'm fairly certain that's implicitly imported by the Spark shell session anyway.
Is this a case that is not covered by the current encoders?
It is not covered in 2.2, but it can easily be addressed. You can add the required Encoder using ExpressionEncoder, either explicitly:
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.Encoder
spark
.createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])
or implicitly:
implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder()
spark.createDataset(Seq(Map(1 -> 2)))
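Another workaround that should also work on 2.2 is to fall back to a Kryo-based binary encoder via Encoders.kryo. Note the trade-off: the map is then stored as a single opaque binary column rather than a native MapType column, so you can't query its keys/values with SQL. A minimal sketch, assuming a running spark session as in the shell transcripts above:

```scala
import org.apache.spark.sql.{Encoder, Encoders}

// Kryo-serialized encoder for Map[Int, Int]: the whole map is
// stored as one binary column, not as a queryable MapType column.
implicit val mapIntIntKryo: Encoder[Map[Int, Int]] =
  Encoders.kryo[Map[Int, Int]]

spark.createDataset(Seq(Map(1 -> 2))).collect
```

This is mostly useful when you only need to carry the maps through a pipeline and collect them back on the driver, not when you need to inspect their contents with Dataset/SQL operations.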
FYI, the expression above just works out of the box in Spark 2.3 (as of this commit, if I'm not mistaken).
scala> spark.version
res0: String = 2.3.0
scala> spark.createDataset(Seq(Map(1 -> 2))).collect
res1: Array[scala.collection.immutable.Map[Int,Int]] = Array(Map(1 -> 2))
I think that's because newMapEncoder is now part of spark.implicits.
scala> :implicits
...
implicit def newMapEncoder[T <: scala.collection.Map[_, _]](implicit evidence: reflect.runtime.universe.TypeTag[T]): org.apache.spark.sql.Encoder[T]
You can "disable" the implicit with the following trick; trying the above expression then results in an error.
trait ThatWasABadIdea
implicit def newMapEncoder(ack: ThatWasABadIdea) = ack
scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:26: error: Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
spark.createDataset(Seq(Map(1 -> 2))).collect
^