wkl*_*wkl 5 scala apache-spark apache-spark-sql apache-spark-dataset apache-spark-encoders
I'm using Spark 2.2, and I ran into trouble when calling spark.createDataset on a Seq of Map.
Code and output from my Spark shell session follow:
// createDataSet on Seq[T] where T = Int works
scala> spark.createDataset(Seq(1, 2, 3)).collect
res0: Array[Int] = Array(1, 2, 3)
scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:24: error: Unable to find encoder for type stored in a Dataset.
Primitive types (Int, String, etc) and Product types (case classes) are
supported by importing spark.implicits._
Support for serializing other types will be added in future releases.
spark.createDataset(Seq(Map(1 -> 2))).collect
^
// createDataSet on a custom case class containing Map works
scala> case class MapHolder(m: Map[Int, Int])
defined class MapHolder
scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect
res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2)))
I tried import spark.implicits._, although I'm fairly sure it is already imported implicitly by the Spark shell session.
Is this a case that the current encoders don't cover?
It is not covered in 2.2, but it is easily addressed. You can explicitly provide the required Encoder using ExpressionEncoder:
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.Encoder
spark
.createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])
Or implicitly:
implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder()
spark.createDataset(Seq(Map(1 -> 2)))
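If you need this for more than one key/value combination, the workaround above can be generalized. The sketch below (an assumption, not part of Spark's public API) derives an Encoder for any Map whose key and value types carry a TypeTag, which is what ExpressionEncoder needs to reflect on the type; the SparkSession setup is included so it runs outside the shell:

```scala
import scala.reflect.runtime.universe.TypeTag
import org.apache.spark.sql.{Encoder, SparkSession}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("map-encoder-demo")
  .getOrCreate()

// Hypothetical generalization of the explicit workaround: given TypeTags for
// K and V, the compiler can materialize a TypeTag[Map[K, V]], which is all
// ExpressionEncoder needs to build an encoder at runtime.
implicit def mapEncoder[K: TypeTag, V: TypeTag]: Encoder[Map[K, V]] =
  ExpressionEncoder()

val ds = spark.createDataset(Seq(Map(1 -> 2), Map(3 -> 4)))
ds.collect()  // Array(Map(1 -> 2), Map(3 -> 4))
```

Note that Spark 2.3.0 added an implicit Map encoder to spark.implicits._, so on newer versions no workaround is needed at all.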