Mah*_*afy 5 apache-spark apache-spark-sql apache-spark-dataset
I'm trying to create a Dataset in Java, so I wrote the following code:
public Dataset<Person> createDataset() {
    List<Person> list = new ArrayList<>();
    list.add(new Person("name", 10, 10.0));
    Dataset<Person> dataset = sqlContext.createDataset(list, Encoders.bean(Person.class));
    return dataset;
}
The Person class is an inner class.
But Spark throws the following exception:
org.apache.spark.sql.AnalysisException: Unable to generate an encoder for inner class `....` without access to the scope that this class was defined in. Try moving this class out of its parent class.;
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$2.applyOrElse(ExpressionEncoder.scala:264)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$2.applyOrElse(ExpressionEncoder.scala:260)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:243)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:243)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:242)
How can I do this properly?
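For reference, the exception message itself suggests a fix: move Person out of its parent class. Below is a minimal sketch of what that could look like; the SparkSession setup and the Person fields (name, age, score) are assumptions, since the original class isn't shown:

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class CreateDatasetExample {

    // Top-level (or public static nested) JavaBean: Encoders.bean expects
    // a public class with a no-arg constructor and getters/setters.
    // The field names here are hypothetical.
    public static class Person implements Serializable {
        private String name;
        private int age;
        private double score;

        public Person() {}

        public Person(String name, int age, double score) {
            this.name = name;
            this.age = age;
            this.score = score;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
        public double getScore() { return score; }
        public void setScore(double score) { this.score = score; }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("create-dataset")
                .master("local[*]")
                .getOrCreate();

        // Because Person is now a static nested class, the bean encoder
        // can instantiate it without an enclosing instance.
        List<Person> list = Arrays.asList(new Person("name", 10, 10.0));
        Dataset<Person> dataset = spark.createDataset(list, Encoders.bean(Person.class));
        dataset.show();

        spark.stop();
    }
}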
Jac*_*ski 12
TL;DR (Spark shell only): Define your case classes first, and only once they are defined, use them. In a Spark/Scala application, using case classes should just work.
In the Spark shell as of 2.0.1, you should first define the case classes, and only then access them to create a Dataset.
$ ./bin/spark-shell --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0-SNAPSHOT
      /_/
Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_102
Branch master
Compiled by user jacek on 2016-10-25T04:20:04Z
Revision 483c37c581fedc64b218e294ecde1a7bb4b2af9c
Url https://github.com/apache/spark.git
Type --help for more information.
$ ./bin/spark-shell
scala> :pa
// Entering paste mode (ctrl-D to finish)
case class Person(id: Long)
Seq(Person(0)).toDS // <-- this won't work
// Exiting paste mode, now interpreting.
<console>:15: error: value toDS is not a member of Seq[Person]
Seq(Person(0)).toDS // <-- this won't work
^
scala> case class Person(id: Long)
defined class Person
scala> // the following implicit conversion *will* work
scala> Seq(Person(0)).toDS
res1: org.apache.spark.sql.Dataset[Person] = [id: bigint]
A fix for this issue was added in commit 43ebf7a9cbd70d6af75e140a6fc91bf0ffc2b877 (Spark 2.0.0-SNAPSHOT as of March 21).
In the Scala REPL I had to add OuterScopes.addOuterScope(this) while :paste-ing the complete snippet, as follows:
scala> :pa
// Entering paste mode (ctrl-D to finish)
import sqlContext.implicits._
case class Token(name: String, productId: Int, score: Double)
val data = Token("aaa", 100, 0.12) ::
  Token("aaa", 200, 0.29) ::
  Token("bbb", 200, 0.53) ::
  Token("bbb", 300, 0.42) :: Nil
org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this)
val ds = data.toDS
The solution is to add this line at the beginning of the method:
org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this);
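Applied to the method from the question, that looks roughly like the sketch below. It reuses the question's sqlContext and keeps Person as an inner class, as in the original code:

public Dataset<Person> createDataset() {
    // Register the enclosing instance so Spark's bean encoder can
    // instantiate the inner Person class when decoding rows.
    org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this);

    List<Person> list = new ArrayList<>();
    list.add(new Person("name", 10, 10.0));
    return sqlContext.createDataset(list, Encoders.bean(Person.class));
}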