Serialization exception on Spark

sup*_*han 7 scala serializable apache-spark

I ran into a very strange serialization problem in Spark. The code is as follows:

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

class PLSA(val sc: SparkContext, val numOfTopics: Int) extends Serializable
{
    def infer(documents: RDD[Document]): RDD[DocumentParameter] = {
      val docs = documents.map(doc => DocumentParameter(doc, numOfTopics))
      docs
    }
}

where Document is defined as:

class Document(val tokens: SparseVector[Int]) extends Serializable

and DocumentParameter is:

class DocumentParameter(val document: Document, val theta: Array[Float]) extends Serializable

object DocumentParameter extends Serializable
{
  def apply(document: Document, numOfTopics: Int) = new DocumentParameter(document, 
    Array.ofDim[Float](numOfTopics))
}

SparseVector is breeze.linalg.SparseVector, which is a serializable class.

This is a simple map operation, and all of the classes are serializable, but I get this exception:

org.apache.spark.SparkException: Task not serializable

But when I remove the numOfTopics parameter, that is:

object DocumentParameter extends Serializable
{
  def apply(document: Document) = new DocumentParameter(document, 
    Array.ofDim[Float](10))
}

and call it like this:

val docs = documents.map(DocumentParameter.apply)

then it seems to work fine.

Is Int not serializable? But I have indeed seen code written this way.

I don't know how to fix this bug.

UPDATE:

Thanks @samthebest. I will add more details about it.

stack trace:
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1242)
    at org.apache.spark.rdd.RDD.map(RDD.scala:270)
    at com.topicmodel.PLSA.infer(PLSA.scala:13)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
    at $iwC$$iwC$$iwC.<init>(<console>:39)
    at $iwC$$iwC.<init>(<console>:41)
    at $iwC.<init>(<console>:43)
    at <init>(<console>:45)
    at .<init>(<console>:49)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:789)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1062)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:615)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:646)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:610)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:859)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:771)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:616)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:624)
    at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:629)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:954)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:902)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:997)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:73)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
    ... 46 more

Since the stack trace only gives general information about the exception, I removed the rest of it.

I run the code in spark-shell.

// suppose docs is an RDD[Document] that I have already obtained
val numOfTopics = 100
val plsa = new PLSA(sc, numOfTopics)
val docPara = plsa.infer(docs)

Could you give me some tutorials or tips on serialization?

lmm*_*lmm 10

Anonymous functions serialize their containing class. When you map {doc => DocumentParameter(doc, numOfTopics)}, the only way it can give that function access to numOfTopics is to serialize the PLSA class. And that class can't actually be serialized, because (as you can see from the stack trace) it contains the SparkContext, which isn't serializable (bad things would happen if individual cluster nodes had access to the context and could, e.g., create new jobs from within a mapper).
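
To illustrate, here is a minimal sketch of one common workaround (my own illustration, not code from the answer): copy the field into a local val before the map, so the closure captures only a plain Int rather than the whole PLSA instance.

// Sketch only: break the reference to `this` by copying the field into a local val.
def infer(documents: RDD[Document]): RDD[DocumentParameter] = {
  val topics = numOfTopics  // plain Int, serializable on its own
  documents.map(doc => DocumentParameter(doc, topics))  // closure no longer drags in PLSA (and its SparkContext)
}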

In general, try to avoid storing the SparkContext in your classes (edit: or at least, make it very clear which classes contain the SparkContext and which don't); it's better to pass it as a (possibly implicit) parameter to the individual methods that need it. Alternatively, move the function {doc => DocumentParameter(doc, numOfTopics)} into a different class from PLSA, one that really can be serialized.
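
A rough sketch of what that separation could look like (the name PLSAModel is hypothetical, not from the answer): keep the SparkContext in a driver-side class and put the mapping logic in a small serializable class that holds only plain data.

// Hypothetical split, for illustration only.
class PLSAModel(val numOfTopics: Int) extends Serializable {
  def infer(documents: RDD[Document]): RDD[DocumentParameter] =
    documents.map(doc => DocumentParameter(doc, numOfTopics))  // only PLSAModel (an Int wrapper) is serialized
}

class PLSA(val sc: SparkContext, val numOfTopics: Int) {  // no longer needs to be Serializable
  private val model = new PLSAModel(numOfTopics)
  def infer(documents: RDD[Document]): RDD[DocumentParameter] = model.infer(documents)
}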

(As multiple people have suggested, it's possible to keep the SparkContext in the class but mark it as @transient so that it won't be serialized. I don't recommend this approach; it means the class will "magically" change state when serialized (losing the SparkContext), so you might end up with NPEs when you try to access the SparkContext from inside a serialized job. It's better to maintain a clear distinction between classes that are only used in "control" code (and may use the SparkContext) and classes that are serialized to run on the cluster (which must not hold the SparkContext).)
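
For completeness, the @transient variant warned about above would look roughly like this (my sketch, with the caveat the answer describes):

// Sketch of the @transient approach: sc is skipped during serialization,
// so it is null on executors; any use of sc inside a closure would NPE there.
class PLSA(@transient val sc: SparkContext, val numOfTopics: Int) extends Serializable {
  def infer(documents: RDD[Document]): RDD[DocumentParameter] =
    documents.map(doc => DocumentParameter(doc, numOfTopics))  // ok: the closure never touches sc
}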

  • Thanks. It works as you suggested. Also, I found another way to solve this problem: add `@transient` before `val sc: SparkContext`, so that the `SparkContext` will not be serialized. (2 upvotes)