Apache Spark - Dataset operations fail in an abstract base class?

kpo*_*kpo 5 abstract-class scala apache-spark

I'm trying to extract some common code into an abstract class, but I've run into a problem.

Let's say I'm reading in files with the format "id | name":

import org.apache.spark.sql.Dataset

case class Person(id: Int, name: String) extends Serializable

object Persons {
  def apply(lines: Dataset[String]): Dataset[Person] = {
    import lines.sparkSession.implicits._
    lines.map(line => {
      // split on the pipe delimiter ("\\|" because split takes a regex)
      val fields = line.split("\\|")
      Person(fields(0).toInt, fields(1))
    })
  }
}

Persons(spark.read.textFile("persons.txt")).show()

Great, this works fine. Now let's say I want to read many different kinds of files that all have a "name" field, so I'll extract out the common logic:

import org.apache.spark.sql.Dataset

trait Named extends Serializable { val name: String }

abstract class NamedDataset[T <: Named] {
  def createRecord(fields: Array[String]): T
  def apply(lines: Dataset[String]): Dataset[T] = {
    import lines.sparkSession.implicits._
    lines.map(line => createRecord(line.split("\\|")))
  }
}

case class Person(id: Int, name: String) extends Named

object Persons extends NamedDataset[Person] {
  override def createRecord(fields: Array[String]) =
    Person(fields(0).toInt, fields(1))
}

This fails with two errors:

Error:
Unable to find encoder for type stored in a Dataset.  
Primitive types (Int, String, etc) and Product types (case classes) 
are supported by importing spark.implicits._  Support for serializing 
other types will be added in future releases.
lines.map(line => createRecord(line.split("\\|")))

Error:
not enough arguments for method map: 
(implicit evidence$7: org.apache.spark.sql.Encoder[T])org.apache.spark.sql.Dataset[T].
Unspecified value parameter evidence$7.
lines.map(line => createRecord(line.split("\\|")))

I have a feeling this has something to do with implicits, TypeTags, and/or ClassTags, but I'm new to Scala and don't fully understand those concepts yet.

Tza*_*har 7

You have to make two small changes:

  • Since only primitives and Products are supported (as the error message states), making your Named trait Serializable isn't enough: you should make it extend Product instead (which case classes and tuples already do).
  • Both a ClassTag and a TypeTag are required by Spark to overcome type erasure and figure out the actual type; they are supplied through the context bounds on T (see the desugaring sketch below).
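
For the second point, the context-bound syntax [T : ClassTag : TypeTag] used below is just syntactic sugar for implicit constructor parameters. Here's a minimal sketch of the desugared form (the class name NamedDatasetDesugared and the parameter names ct and tt are made up for illustration):

import scala.reflect.ClassTag
import scala.reflect.runtime.universe.TypeTag

// Each context bound becomes an implicit constructor parameter; with an
// implicit TypeTag[T] in scope, spark.implicits._ can derive the
// Encoder[T] that Dataset.map asks for.
abstract class NamedDatasetDesugared[T <: Named](implicit ct: ClassTag[T], tt: TypeTag[T])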

So - here's a working version:

import org.apache.spark.sql.Dataset
import scala.reflect.ClassTag
import scala.reflect.runtime.universe.TypeTag

// Named now extends Product, so case classes mixing it in are supported by Spark.
trait Named extends Product { val name: String }

// The context bounds give Spark the ClassTag and TypeTag it needs to
// derive an Encoder[T] when map is called.
abstract class NamedDataset[T <: Named : ClassTag : TypeTag] extends Serializable {
  def createRecord(fields: Array[String]): T
  def apply(lines: Dataset[String]): Dataset[T] = {
    import lines.sparkSession.implicits._
    lines.map(line => createRecord(line.split("\\|")))
  }
}

case class Person(id: Int, name: String) extends Named

object Persons extends NamedDataset[Person] {
  override def createRecord(fields: Array[String]) =
    Person(fields(0).toInt, fields(1))
}
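
To see the abstraction pay off, here's a minimal usage sketch. It assumes a SparkSession named spark is in scope (as in the question); the Employee type, its salary field, and employees.txt are hypothetical, just to show a second pipe-delimited format reusing the same base class:

// Any case class with a `name` field can mix in Named (case classes
// already extend Product, so the encoder requirement is satisfied).
case class Employee(name: String, salary: Double) extends Named

object Employees extends NamedDataset[Employee] {
  override def createRecord(fields: Array[String]) =
    Employee(fields(0), fields(1).toDouble)
}

// Each call derives its Encoder from the concrete type's TypeTag:
Persons(spark.read.textFile("persons.txt")).show()
Employees(spark.read.textFile("employees.txt")).show()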