Running an app jar file with spark-submit on a Google Dataproc cluster instance

Sau*_*cci 4 java jar apache-spark google-cloud-dataproc

I am running a .jar file that contains all the dependencies I need packaged inside it. One of those dependencies is com.google.common.util.concurrent.RateLimiter, and I have already checked that its class file is inside this .jar file.

Unfortunately, when I run the spark-submit command on the master node of my Google Dataproc cluster instance, I get this error:

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.createStarted()Lcom/google/common/base/Stopwatch;
at com.google.common.util.concurrent.RateLimiter$SleepingStopwatch$1.<init>(RateLimiter.java:417)
at com.google.common.util.concurrent.RateLimiter$SleepingStopwatch.createFromSystemTimer(RateLimiter.java:416)
at com.google.common.util.concurrent.RateLimiter.create(RateLimiter.java:130)
at LabeledAddressDatasetBuilder.publishLabeledAddressesFromBlockstem(LabeledAddressDatasetBuilder.java:60)
at LabeledAddressDatasetBuilder.main(LabeledAddressDatasetBuilder.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

It seems that something is overriding my dependencies. I have already decompiled the Stopwatch.class file from this .jar and verified that the method is there. This only happens when I run on the Google Dataproc instance. I grepped the process while spark-submit was executing and got a -cp flag like this:

/usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/lib/spark/conf/:/usr/lib/spark/lib/spark-assembly-1.5.0-hadoop2.7.1.jar:/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/etc/hadoop/conf/:/etc/hadoop/conf/:/usr/lib/hadoop/lib/native/:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/*

Is there any way to solve this problem?

Thanks.

Ang*_*vis 7

As you've found, Dataproc includes the Hadoop dependencies on the classpath when invoking Spark. This is done primarily so that using Hadoop input formats, file systems, etc. is fairly straightforward. The downside is that you end up with Hadoop's version of Guava, which is 11.0.2 (see HADOOP-10101).
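To confirm which copy of a class actually wins at runtime, you can ask the JVM where it loaded the class from. This is a generic diagnostic sketch (the class name passed in is just an example), not anything Dataproc-specific:

```java
// WhichJar.java -- print the jar (or directory) a class was actually loaded from.
// Run it inside your Spark app, passing e.g. com.google.common.util.concurrent.RateLimiter.
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        String name = args.length > 0 ? args[0] : "com.google.common.base.Stopwatch";
        Class<?> clazz = Class.forName(name);
        CodeSource source = clazz.getProtectionDomain().getCodeSource();
        // Classes on the bootstrap classpath (e.g. java.lang.String) have no code source.
        System.out.println(name + " loaded from: "
                + (source != null ? source.getLocation() : "bootstrap classpath"));
    }
}
```

If the location printed for a Guava class is one of the Hadoop jars rather than your app jar, that tells you the classpath ordering is the culprit.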

How to work around this depends on your build system. If you use Maven, the maven-shade plugin can be used to relocate your version of Guava under a new package name. An example of this can be seen in the GCS Hadoop Connector's packaging, but the crux of it is the following plugin declaration in the build section of your pom.xml:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.3</version>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>shade</goal>
        </goals>
        <configuration>
          <relocations>
            <relocation>
              <pattern>com.google.common</pattern>
              <shadedPattern>your.repackaged.deps.com.google.common</shadedPattern>
            </relocation>
          </relocations>
        </configuration>
      </execution>
    </executions>
  </plugin>

A similar relocation can be accomplished with the sbt-assembly plugin for sbt builds, jarjar for ant builds, and either jarjar or shadow for gradle builds.
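For sbt, the equivalent of the Maven relocation above is a shade rule in build.sbt. This is a hedged sketch assuming the sbt-assembly plugin is already on your build (added in project/plugins.sbt); the target package name mirrors the Maven example and is yours to choose:

```scala
// build.sbt -- relocate your bundled Guava under a new package name with sbt-assembly
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "your.repackaged.deps.com.google.common.@1").inAll
)
```

The `@1` placeholder keeps the remainder of each matched class name intact after the renamed prefix.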