use*_*400 (score 9) · Tags: java, apache-spark, spark-cassandra-connector
This error has been the hardest one to track down. I'm not sure what is going on. I'm running a Spark cluster on my local machine, so the entire Spark cluster is under one host, 127.0.0.1, running in standalone mode:
JavaPairRDD<byte[], Iterable<CassandraRow>> cassandraRowsRDD = javaFunctions(sc).cassandraTable("test", "hello")
        .select("rowkey", "col1", "col2", "col3")
        .spanBy(new Function<CassandraRow, byte[]>() {
            @Override
            public byte[] call(CassandraRow v1) {
                return v1.getBytes("rowkey").array();
            }
        }, byte[].class);

Iterable<Tuple2<byte[], Iterable<CassandraRow>>> listOftuples = cassandraRowsRDD.collect(); // ERROR HAPPENS HERE
Tuple2<byte[], Iterable<CassandraRow>> tuple = listOftuples.iterator().next();
byte[] partitionKey = tuple._1();
for (CassandraRow cassandraRow : tuple._2()) {
    System.out.println("************START************");
    System.out.println(new String(partitionKey));
    System.out.println("************END************");
}
This error is the hardest to track down. It clearly happens at cassandraRowsRDD.collect(), and I don't know why:
16/10/09 23:36:21 ERROR Executor: Exception in task 2.3 in stage 0.0 (TID 21)
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2006)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
These are the versions I am using:
Scala code runner version 2.11.8 // when I run scala -version or even ./spark-shell
compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.0.0'
compile group: 'org.apache.spark', name: 'spark-streaming_2.11', version: '2.0.0'
compile group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.0.0'
compile group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.11', version: '2.0.0-M3'
My Gradle file came about after introducing something called "provided", which doesn't actually seem to exist out of the box, but Google said to create one, so my build.gradle looks like this:
group 'com.company'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'idea'

repositories {
    mavenCentral()
    mavenLocal()
}

configurations {
    provided
}

sourceSets {
    main {
        compileClasspath += configurations.provided
        test.compileClasspath += configurations.provided
        test.runtimeClasspath += configurations.provided
    }
}

idea {
    module {
        scopes.PROVIDED.plus += [configurations.provided]
    }
}

dependencies {
    compile 'org.slf4j:slf4j-log4j12:1.7.12'
    provided group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.0.0'
    provided group: 'org.apache.spark', name: 'spark-streaming_2.11', version: '2.0.0'
    provided group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.0.0'
    provided group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.11', version: '2.0.0-M3'
}

jar {
    from { configurations.provided.collect { it.isDirectory() ? it : zipTree(it) } }
    // with jar
    from sourceSets.test.output
    manifest {
        attributes 'Main-Class': "com.company.batchprocessing.Hello"
    }
    exclude 'META-INF/*.RSA', 'META-INF/*.SF', 'META-INF/*.DSA'
    zip64 true
}
Answer by Hol*_*ndl (score 11):
I had the same problem and was able to solve it by adding my application's jar to Spark's classpath with:
spark = SparkSession.builder()
  .appName("Foo")
  .config("spark.jars", "target/scala-2.11/foo_2.11-0.1.jar")
  .getOrCreate()
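The same fix can be expressed with the Java API as well. A minimal sketch, assuming the fat jar from the build.gradle above ends up at build/libs/batchprocessing-1.0-SNAPSHOT.jar (the path and app name here are illustrative assumptions, not from the original post):

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("batchprocessing")  // illustrative app name
        // Ship the application jar (which contains the anonymous Function class)
        // to the executors; without it, the worker-side deserializer cannot load
        // the closure class and fails with this ClassCastException.
        .config("spark.jars", "build/libs/batchprocessing-1.0-SNAPSHOT.jar")
        .getOrCreate();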
Answer by 小智 (score 5):
I ran into the same exception and dug into multiple related JIRAs (SPARK-9219, SPARK-12675, SPARK-18075).
I believe the exception name is confusing, and the real problem is an inconsistent environment setup between the Spark cluster and the driver application.
For example, I started my Spark cluster with the following line in conf/spark-defaults.conf:
spark.master spark://master:7077
while I started my driver program with spark-submit (the program even began with the line):
sparkSession.master("spark://<master ip>:7077")
where <master ip> is the correct IP address of the master node, yet the program failed because of this simple inconsistency.
Therefore, I suggest launching all driver applications with spark-submit, and not duplicating any configuration in the driver code (unless you need to override some of it). In other words, just let spark-submit set up your environment the same way it is set in the running Spark cluster.
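Following that advice, the driver would not hard-code the master at all. A minimal sketch, with the class and jar names taken from the question's build file and the spark-submit invocation shown only as an illustration:

import org.apache.spark.sql.SparkSession;

// Launched with something like:
//   spark-submit --master spark://master:7077 \
//       --class com.company.batchprocessing.Hello build/libs/batchprocessing-1.0-SNAPSHOT.jar
// so that the master URL is defined in exactly one place.
SparkSession spark = SparkSession.builder()
        .appName("batchprocessing")
        .getOrCreate();  // no .master(...) here; spark-submit supplies it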
Your call() method should return byte[], as below:
@Override
public byte[] call(CassandraRow v1) {
    return v1.getBytes("rowkey").array();
}
If you still have the problem, check the dependency versions mentioned in JIRA https://issues.apache.org/jira/browse/SPARK-9219.