oge*_*gen 9 scala amazon-s3 amazon-web-services apache-spark aws-sdk
I want to run a simple Spark job from my local development machine (via IntelliJ) that reads data from Amazon S3.
My build.sbt file:
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.3.1",
  "org.apache.spark" %% "spark-sql" % "2.3.1",
  "com.amazonaws" % "aws-java-sdk" % "1.11.407",
  "org.apache.hadoop" % "hadoop-aws" % "3.1.1"
)
My code snippet:
val spark = SparkSession
  .builder
  .appName("test")
  .master("local[2]")
  .getOrCreate()

spark
  .sparkContext
  .hadoopConfiguration
  .set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")

val schema_p = ...

val df = spark
  .read
  .schema(schema_p)
  .parquet("s3a:///...")
I get the following exception:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2093)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2058)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2152)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2580)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:354)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:606)
at Test$.delayedEndpoint$Test$1(Test.scala:27)
at Test$delayedInit$body.apply(Test.scala:4)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at Test$.main(Test.scala:4)
at Test.main(Test.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.StreamCapabilities
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 41 more
When I replace s3a:/// with s3:///, I get a different error: No FileSystem for scheme: s3.
Since I am new to AWS, I don't know whether I should use s3:///, s3a:/// or s3n:///. I have already set up my AWS credentials with aws-cli.
I don't have any Spark installation on my machine.
Thanks in advance for your help.
Ste*_*ran 15
I would start by looking at the S3A troubleshooting docs.
Do not attempt to "drop in" a newer version of the AWS SDK than the one your Hadoop version was built with. Whatever problem you have, changing the AWS SDK version will not fix it, only change the stack traces you see.
Whatever version of the hadoop- JARs your local Spark installation has, you need exactly the same version of hadoop-aws, and exactly the same version of the AWS SDK that that hadoop-aws was built with. Try mvnrepository for the details.
For me, in addition to the above, this was solved by adding the following dependency to the pom.xml:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.1.1</version>
</dependency>