mit*_*hra 11 java maven cassandra-2.0 apache-spark
I am using Spark 1.3.1, the prebuilt version spark-1.3.1-bin-hadoop2.6.tgz.
线程"main"中的异常java.lang.NoSuchMethodError:scala.Predef $.$ conforms()Lscala/Predef $$ less $ colon $ less; org.apache.spark.util.Utils $ .getSystemProperties(Utils.scala:1418)org.apache.spark.SparkConf.(SparkConf.scala:58)org.apache.spark.SparkConf.(SparkConf.scala: 52)在com.zoho.zbi.Testing.test(Testing.java:43)com.zoho.zbi.Testing.main(Testing.java:39)使用Spark的默认log4j配置文件:org/apache/spark/log4j- defaults.properties
I am trying a simple demo app that saves data to Cassandra:
// Static imports assumed for javaFunctions(...) and mapToRow(...) below:
// import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
// import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

SparkConf batchConf = new SparkConf()
        .setSparkHome(sparkHome)
        .setJars(jars)
        .setAppName(ZohoBIConstants.getAppName("cassandra")) //NO I18N
        .setMaster(master)
        .set("spark.cassandra.connection.host", "localhost");
JavaSparkContext sc = new JavaSparkContext(batchConf);

// here we are going to save some data to Cassandra...
List<Person> people = Arrays.asList(
        Person.newInstance(1, "John", new Date()),
        Person.newInstance(2, "Anna", new Date()),
        Person.newInstance(3, "Andrew", new Date())
);
// Person test = Person.newInstance(1, "vini", new Date());
System.out.println("Inside Java API Demo : " + people);
JavaRDD<Person> rdd = sc.parallelize(people);
System.out.println("Inside Java API Demo rdd : " + rdd);

// write the RDD to keyspace "test", table "people"
javaFunctions(rdd).writerBuilder("test", "people", mapToRow(Person.class)).saveToCassandra();
System.out.println("Stopping sc");
sc.stop();
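The Person class is not shown in the question; for context, here is a minimal sketch of what it could look like. The fields id, name, and birthDate are assumptions for illustration; what matters is that mapToRow(Person.class) expects a serializable JavaBean whose properties match the target table's columns:

// Minimal sketch of the Person bean assumed above; field names are illustrative.
public class Person implements java.io.Serializable {
    private Integer id;
    private String name;
    private java.util.Date birthDate;

    public static Person newInstance(Integer id, String name, java.util.Date birthDate) {
        Person p = new Person();
        p.id = id;
        p.name = name;
        p.birthDate = birthDate;
        return p;
    }

    // JavaBean accessors are required by the connector's reflection-based mapper.
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public java.util.Date getBirthDate() { return birthDate; }
    public void setBirthDate(java.util.Date birthDate) { this.birthDate = birthDate; }
}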
It works when I submit it using spark-submit:
bin/spark-submit --class "abc.efg.Testing" --master spark://xyz:7077 /home/test/target/uber-Cassandra-0.0.1-SNAPSHOT.jar
Here is my pom:

Dependencies:
<dependencies>
  <!-- Scala -->
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
    <scope>compile</scope>
  </dependency>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-compiler</artifactId>
    <version>${scala.version}</version>
  </dependency>
  <!-- END Scala -->
  <dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>18.0</version>
  </dependency>
  <dependency>
    <groupId>com.yammer.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>2.2.0</version>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>2.1.5</version>
  </dependency>
  <dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20090211</version>
  </dependency>
  <!-- Cassandra Spark Connector dependency -->
  <dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.2.0</version>
  </dependency>
  <!-- Cassandra java Connector dependency -->
  <dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.2.0</version>
  </dependency>
  <!-- Spark Core dependency -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>1.3.1</version>
  </dependency>
  <!-- Spark dependency -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>1.3.1</version>
  </dependency>
  <!-- Spark dependency -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka_2.10</artifactId>
    <version>1.3.1</version>
  </dependency>
</dependencies>
And I build using:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.3.2</version>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <filters>
          <filter>
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
        <finalName>uber-${project.artifactId}-${project.version}</finalName>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.3.2</version>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
  </plugins>
</build>
But when I submit it through code it does not work; any help is much appreciated. I tried adding the scala 2.10.4 properties in the pom, still no luck.

I run it from Eclipse as a plain Java application, with the master, Spark home, and jars all set in the SparkConf, and the error points to the SparkConf.
My Scala version is:

scala -version
Scala code runner version 2.11.2 -- Copyright 2002-2013, LAMP/EPFL
Does this have anything to do with the problem? How do I switch to an older version of Scala? The docs say Spark 1.3.1 supports Scala 2.10.x; please let me know how to fix this.
Mak*_*sud 19
The problem you are experiencing is due to an incompatibility in Scala versions. The prebuilt Spark 1.3.1 distribution is compiled with the older Scala 2.10, because some of the Spark dependencies are not supported under 2.11, including JDBC support. (Note that your pom also mixes Scala builds: spark-cassandra-connector_2.10 and spark-streaming-kafka_2.10 sit alongside spark-core_2.11 and spark-streaming_2.11; every _2.xx artifact must target the same Scala version.)
I would recommend running your Spark cluster with Scala 2.10. However, if needed, you can also compile your Spark package with Scala 2.11 in the following way:
dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
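Whichever route you take, it helps to verify which scala-library your application actually runs against, since a NoSuchMethodError on scala.Predef$.$conforms means the runtime Scala version differs from the one your dependencies were built for. A quick diagnostic sketch (scala.util.Properties ships with scala-library and is callable from Java):

// Prints the version of the scala-library jar actually on the classpath,
// e.g. "version 2.10.4" or "version 2.11.2".
public class ScalaVersionCheck {
    public static void main(String[] args) {
        System.out.println(scala.util.Properties.versionString());
    }
}

If this prints a 2.11.x version while _2.10 artifacts are on the classpath (or vice versa), that mismatch is the source of the error.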