"Unable to find encoder for type stored in a Dataset" even though spark.implicits._ is imported?

ca9*_*3d9 1 scala apache-spark

Error:(39, 12) Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
    dbo.map((r) => ods.map((s) => {

Error:(39, 12) not enough arguments for method map: (implicit evidence$6: org.apache.spark.sql.Encoder[org.apache.spark.sql.Dataset[Int]])org.apache.spark.sql.Dataset[org.apache.spark.sql.Dataset[Int]].
Unspecified value parameter evidence$6.
    dbo.map((r) => ods.map((s) => {
object Main extends App {
  ....

  def compare(sqlContext: org.apache.spark.sql.SQLContext, 
            dbo: Dataset[Cols], ods: Dataset[Cols]) = {
    import sqlContext.implicits._ // Tried import dbo.sparkSession.implicits._ too
    dbo.map((r) => ods.map((s) => { // Errors occur here
      0
    }))
  }
}

case class Cols (A: Int,
                   B: String,
                   C: String,
                   D: String,
                   E: Double,
                   F: Date,
                   G: String,
                   H: String,
                   I: Double,
                   J: String
                  )
  1. Why is there still an error after importing sqlContext.implicits._?
  2. I created the sqlContext parameter solely for that import. Is there a better way that avoids passing the parameter?

    This should be solved by import dbo.sparkSession.implicits._

Mik*_*nte 5

Your code is trying to create a Dataset[Dataset[Int]], and that is wrong for several reasons:

You cannot nest a Dataset inside a Dataset; if you want to cross data from two Datasets, use a join instead.

There is no way to create an Encoder[Dataset[Int]]. You can have an Encoder[Int], but an encoder for a nested Dataset makes no sense.
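A plain-Scala analogy (no Spark needed) makes the shape problem visible: nesting `map` calls yields a nested collection, while what you presumably want is a flat collection of row pairs. The `dbo`/`ods` values here are stand-in `Seq`s, not the real Datasets from the question.

```scala
// Stand-in sequences playing the role of the two Datasets.
val dbo = Seq(1, 2)
val ods = Seq(10, 20)

// Nested map: the inner map produces a whole collection per outer row,
// so the result is Seq[Seq[Int]] -- the analogue of Dataset[Dataset[Int]].
val nested: Seq[Seq[Int]] = dbo.map(r => ods.map(s => 0))

// A cross product instead yields a flat collection of pairs,
// the analogue of joinWith's Dataset[(Cols, Cols)].
val crossed: Seq[(Int, Int)] = for { r <- dbo; s <- ods } yield (r, s)
```

Spark can encode the flat pair shape, but has no encoder for the nested one.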

Something like this makes more sense:

import org.apache.spark.sql.{functions => func}

// joinWith with a literal-true condition is a cross join: it pairs every
// row of dbo with every row of ods, yielding a Dataset[(Cols, Cols)].
dbo.joinWith(ods, func.expr("true")).map {
  case (r, s) =>
    0
}
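A cross join compares every row with every row, which gets expensive fast. In practice you would usually join on a key instead. A sketch, assuming column A of Cols is a shared key (this fragment presumes the `dbo` and `ods` Datasets from the question and a running Spark session, so it is not runnable standalone):

```scala
// Sketch only: join the two Datasets on the assumed key column "A",
// then compare each matched pair of rows.
dbo.joinWith(ods, dbo("A") === ods("A")).map {
  case (r, s) =>
    0 // replace with your actual comparison of r and s
}
```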