java.lang.ClassCastException when using lambda expressions in a Spark job on a remote server

Meh*_*ban 22 java lambda java-8 spark-java

I'm trying to build a web API for my Apache Spark job using the sparkjava.com framework. My code is:

@Override
public void init() {
    get("/hello",
            (req, res) -> {
                String sourcePath = "hdfs://spark:54310/input/*";

                SparkConf conf = new SparkConf().setAppName("LineCount");
                conf.setJars(new String[] { "/home/sam/resin-4.0.42/webapps/test.war" });
                File configFile = new File("config.properties");

                String sparkURI = "spark://hamrah:7077";

                conf.setMaster(sparkURI);
                conf.set("spark.driver.allowMultipleContexts", "true");
                JavaSparkContext sc = new JavaSparkContext(conf);

                @SuppressWarnings("resource")
                JavaRDD<String> log = sc.textFile(sourcePath);

                JavaRDD<String> lines = log.filter(x -> {
                    return true;
                });

                return lines.count();
            });
}

If I remove the lambda expression, or put it in a plain jar rather than in a web service (which is essentially a servlet), it runs without any error. But using a lambda expression inside the servlet causes this exception:

15/01/28 10:36:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hamrah): java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.api.java.JavaRDD$$anonfun$filter$1.f$1 of type org.apache.spark.api.java.function.Function in instance of org.apache.spark.api.java.JavaRDD$$anonfun$filter$1
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1999)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

P.S.: I tried combinations of jersey and javaspark with jetty, tomcat, and resin, and all of them gave me the same result.

Hol*_*ger 41

What you have here is a follow-up error that masks the original error.

When lambda instances are serialized, they use writeReplace to dissolve their JRE-specific implementation into a persistent form, which is a SerializedLambda instance. When the SerializedLambda instance has been restored, its readResolve method will be invoked to reconstitute the appropriate lambda instance. As the documentation says, it does so by invoking a special method of the class that defined the original lambda (see also this answer). The important point is that the original class is needed, and that is what's missing in your case.
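
To make this mechanism visible, you can reflectively call the synthetic writeReplace method that the JRE generates for a serializable lambda and inspect the resulting SerializedLambda; a minimal, self-contained sketch (the class and method names here are purely illustrative, not part of the question's code):

import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;

public class PeekSerializedLambda {
    interface SerRunnable extends Runnable, Serializable {}

    public static void main(String[] args) throws Exception {
        SerRunnable r = () -> {};

        // Serializable lambdas get a synthetic writeReplace() returning a SerializedLambda.
        Method writeReplace = r.getClass().getDeclaredMethod("writeReplace");
        writeReplace.setAccessible(true);
        SerializedLambda sl = (SerializedLambda) writeReplace.invoke(r);

        // The capturing class is the class whose special $deserializeLambda$ method is
        // needed at deserialization time; if that class is missing, readResolve cannot
        // reconstitute the lambda.
        System.out.println("capturing class: " + sl.getCapturingClass());
        System.out.println("implementation:  " + sl.getImplClass() + "." + sl.getImplMethodName());
    }
}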

But there is a ...special... behavior of ObjectInputStream. When it encounters an exception, it doesn't bail out immediately. It records the exception and continues the process, marking all objects currently being read, and thus depending on the erroneous object, as erroneous as well. Only at the end of the process does it throw the original exception it encountered. What makes it so strange is that it also keeps trying to set the fields of these objects. But when you look at the method ObjectInputStream.readOrdinaryObject, line 1806:

…
    if (obj != null &&
        handles.lookupException(passHandle) == null &&
        desc.hasReadResolveMethod())
    {
        Object rep = desc.invokeReadResolve(obj);
        if (unshared && rep.getClass().isArray()) {
            rep = cloneArray(rep);
        }
        if (rep != obj) {
            handles.setObject(passHandle, obj = rep);
        }
    }

    return obj;
}

you see that it does not call the readResolve method when lookupException reports a non-null exception. But when that substitution did not happen, it is not a good idea to continue trying to set the field values of the referrer; yet that is exactly what happens here, hence the ClassCastException.

You can easily reproduce the problem:

// each public class lives in its own source file; all of them need: import java.io.*;
public class Holder implements Serializable {
    Runnable r;
}
public class Defining {
    public static Holder get() {
        final Holder holder = new Holder();
        holder.r=(Runnable&Serializable)()->{};
        return holder;
    }
}
public class Writing {
    static final File f=new File(System.getProperty("java.io.tmpdir"), "x.ser");
    public static void main(String... arg) throws IOException {
        try(FileOutputStream os=new FileOutputStream(f);
            ObjectOutputStream   oos=new ObjectOutputStream(os)) {
            oos.writeObject(Defining.get());
        }
        System.out.println("written to "+f);
    }
}
public class Reading {
    static final File f=new File(System.getProperty("java.io.tmpdir"), "x.ser");
    public static void main(String... arg) throws IOException, ClassNotFoundException {
        try(FileInputStream is=new FileInputStream(f);
            ObjectInputStream ois=new ObjectInputStream(is)) {
            Holder h=(Holder)ois.readObject();
            System.out.println(h.r);
            h.r.run();
        }
        System.out.println("read from "+f);
    }
}

Compile these four classes and run Writing. Then delete the class file Defining.class and run Reading. You will get a

Exception in thread "main" java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field test.Holder.r of type java.lang.Runnable in instance of test.Holder
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
    at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)

(Tested with 1.8.0_20)


The bottom line is: once you understand what is happening, you can forget about this serialization issue. All you have to do to solve your problem is to make sure that the class which defines the lambda expression is also available in the runtime where the lambda is deserialized.

Example for a Spark job run directly from an IDE (spark-submit distributes the jar by default):

SparkConf sconf = new SparkConf()
  .set("spark.eventLog.dir", "hdfs://nn:8020/user/spark/applicationHistory")
  .set("spark.eventLog.enabled", "true")
  .setJars(new String[]{"/path/to/jar/with/your/class.jar"})
  .setMaster("spark://spark.standalone.uri:7077");

  • @ѕтƒ If you run the code from an IDE, just call `setJars(new String[]{"/path/to/jar/with/your/class.jar"})` on your SparkConf instance. `spark-submit` distributes your jar by default, so there is no such problem. (5 upvotes)
  • @George I am not involved in the development of Spark, so I don't know. But I can imagine that the shell remembers the definitions you type in a transferable form that is sufficient to reproduce the code on the other side, so that only the shell's code, or parts of its backend, needs to be available on the other side. (2 upvotes)

Adr*_*ith 5

I ran into the same error and replaced the lambda with an inner class; then it worked. I don't really understand why, and reproducing the error was extremely difficult (we had one server that exhibited this behavior, and nowhere else).

Doesn't work:

this.variable = () -> { ..... }

Yields java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field MyObject.val$variable

Works:

this.variable = new MyInterface() {
    public void myMethod() {
       .....
    }
};
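
Applied back to the question's code, the same workaround means passing an anonymous Function to filter instead of a lambda; a sketch assuming the Spark 1.x Java API used in the question (the class and method names here are just for illustration):

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

public class FilterWithoutLambda {
    // The anonymous class compiles to an ordinary class file that ships inside the
    // deployed jar/war, so executors can load it without needing the lambda's
    // defining class for $deserializeLambda$.
    static JavaRDD<String> keepAll(JavaRDD<String> log) {
        return log.filter(new Function<String, Boolean>() {
            @Override
            public Boolean call(String x) throws Exception {
                return true; // same predicate as the original lambda
            }
        });
    }
}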