Bor*_*ris 10 arrays scala classcastexception apache-spark apache-spark-sql
A Spark DataFrame contains a column of type Array[Double]. It throws a ClassCastException when I try to get it back in a map() function. The following Scala code reproduces the exception:
case class Dummy( x:Array[Double] )
val df = sqlContext.createDataFrame(Seq(Dummy(Array(1,2,3))))
val s = df.map( r => {
val arr:Array[Double] = r.getAs[Array[Double]]("x")
arr.sum
})
s.foreach(println)
The exception is:
java.lang.ClassCastException: scala.collection.mutable.WrappedArray$ofRef cannot be cast to [D
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:24)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:23)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:890)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:890)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Can somebody explain why this doesn't work, and what I should do instead? I am using Spark 1.5.1 and Scala 2.10.6.
Thanks
zer*_*323 22
An ArrayType column is represented in a Row as a scala.collection.mutable.WrappedArray, not as an Array[Double] (the `[D` in the message is the JVM name for a double array). You can extract it using, for example:
val arr: Seq[Double] = r.getAs[Seq[Double]]("x")
or
val i: Int = ???
val arr = r.getSeq[Double](i)
or even:
import scala.collection.mutable.WrappedArray
val arr: WrappedArray[Double] = r.getAs[WrappedArray[Double]]("x")
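Applied to the original snippet, a minimal fix using the first approach might look like this (a sketch, assuming the df defined in the question):
// Seq[Double] matches the runtime representation (WrappedArray), so getAs succeeds
val s = df.map(r => r.getAs[Seq[Double]]("x").sum)
s.foreach(println)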
If the DataFrame is relatively thin, pattern matching can be a better approach:
import org.apache.spark.sql.Row
df.rdd.map { case Row(x: Seq[Double]) => (x.toArray, x.sum) }
although you have to keep in mind that the type of the sequence is unchecked (the element type is erased at runtime).
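For instance, the match above compiles even though the element type cannot be verified; a sketch of making that explicit (the @unchecked type-argument annotation acknowledges the erasure warning in recent Scala versions):
import org.apache.spark.sql.Row

df.rdd.map { case Row(x: Seq[Double @unchecked]) =>
  // the Double type argument is erased at runtime; a row holding, say,
  // strings would only fail later, when an element is actually used
  (x.toArray, x.sum)
}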
In Spark >= 1.6 you can also use a Dataset, as follows:
df.select("x").as[Seq[Double]].rdd
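Building on that line, the original sum could be computed like this (a sketch; assumes the encoder implicits from sqlContext.implicits._ are in scope):
import sqlContext.implicits._

// each element of the RDD is the column value itself, already a Seq[Double]
val sums = df.select("x").as[Seq[Double]].rdd.map(_.sum)
sums.foreach(println)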