Tags: java, hadoop, scala, apache-spark
I'm facing an exception when I try to apply a method (computeDwt) on an RDD[(Int, ArrayBuffer[(Int, Double)])] input. I'm even using the extends Serialization option to serialize objects in Spark. Here is the code snippet.
input: series: RDD[(Int, ArrayBuffer[(Int, Double)])]
DWTsample extends Serialization is a class having a computeDwt function.
sc: SparkContext

val kk: RDD[(Int, List[Double])] = series.map(t => (t._1, new DWTsample().computeDwt(sc, t._2)))
Error:
org.apache.spark.SparkException: Job failed: java.io.NotSerializableException: org.apache.spark.SparkContext
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:758)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:758)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:556)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:503)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:361)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:441)
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:149)
Could anyone suggest what the problem might be and what should be done to overcome it?
Answer from Jos*_*sen:
This line

series.map(t => (t._1, new DWTsample().computeDwt(sc, t._2)))
references the SparkContext (sc), but SparkContext isn't serializable. SparkContext is designed to expose operations that run on the driver; it can't be referenced or used by code that runs on the workers.

You'll have to restructure your code so that sc isn't referenced inside the map function closure.
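A minimal sketch of such a restructuring, assuming computeDwt only needs the local buffer of each record (the sc-free signature and the placeholder body are assumptions, since the original implementation isn't shown):

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.rdd.RDD

// Hypothetical rewrite: computeDwt works on the local data only and never
// touches the SparkContext, so nothing unserializable enters the closure.
class DWTsample extends Serializable {
  def computeDwt(data: ArrayBuffer[(Int, Double)]): List[Double] = {
    data.map(_._2).toList // placeholder; the real wavelet transform goes here
  }
}

// sc stays on the driver; only DWTsample (which is Serializable) is shipped.
val kk: RDD[(Int, List[Double])] =
  series.mapPartitions { iter =>
    val dwt = new DWTsample() // one instance per partition, built on the worker
    iter.map { case (key, buf) => (key, dwt.computeDwt(buf)) }
  }

mapPartitions is used here only so that one DWTsample is constructed per partition rather than per record; a plain series.map(t => (t._1, new DWTsample().computeDwt(t._2))) would avoid the NotSerializableException just as well, since sc is no longer captured.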