This works:
import org.apache.spark.sql.SparkSession

object FilesToDFDS {
  case class Student(id: Int, name: String, dept: String)

  def main(args: Array[String]): Unit = {
    val ss = SparkSession.builder().appName("local").master("local[*]").getOrCreate()
    import ss.implicits._

    val path = "data.txt"

    // RDD[Student] built by splitting each line on spaces
    val rdd = ss.sparkContext.textFile(path)
      .map(x => x.split(" "))
      .map(x => Student(x(0).toInt, x(1), x(2)))

    // CSV reader with a space delimiter, each Row mapped into a Student
    val df = ss.read.format("csv").option("delimiter", " ").load(path)
      .map(x => Student(x.getString(0).toInt, x.getString(1), x.getString(2)))

    // Dataset[String] from the text file, parsed into a Dataset[Student]
    val ds = ss.read.textFile(path)
      .map(x => x.split(" "))
      .map(x => Student(x(0).toInt, x(1), x(2)))

    val rddToDF = ss.sqlContext.createDataFrame(rdd)
  }
}
However, if the case class is moved inside main, the df and ds lines fail to compile with:
Unable to find encoder for type stored in a Dataset. Primitive …
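For reference, this is a minimal sketch of the failing variant being described (same file layout and names assumed as above), with Student declared inside main instead of at the object level:

import org.apache.spark.sql.SparkSession

object FilesToDFDS {
  def main(args: Array[String]): Unit = {
    // Declared inside main, Student is now a local class of this method.
    case class Student(id: Int, name: String, dept: String)

    val ss = SparkSession.builder().appName("local").master("local[*]").getOrCreate()
    import ss.implicits._

    val path = "data.txt"

    // This .map no longer compiles: the implicit Encoder[Student]
    // cannot be derived for a class that is local to a method.
    val ds = ss.read.textFile(path)
      .map(x => x.split(" "))
      .map(x => Student(x(0).toInt, x(1), x(2)))
  }
}

The underlying reason is that Spark's implicit Encoder derivation for case classes (newProductEncoder in ss.implicits._) requires a Scala TypeTag, and the compiler cannot produce a TypeTag for a class defined inside a method. Declaring the case class at the top level, or as a member of the enclosing object, avoids the error.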