Posts by ban*_*man

PySpark S3 error: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException

I think I am running into a jar incompatibility problem. I used the following files to build the Spark cluster:

  1. Spark-2.4.7-bin-hadoop2.7.tgz
  2. aws-java-sdk-1.11.885.jar
  3. hadoop-aws-2.7.4.jar

from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
import sys

spark = (SparkSession.builder
         .appName("AuthorsAges")
         .getOrCreate())


spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")


input_file='s3a://spark-test-data/Fire_Department_Calls_for_Service.csv'

file_schema = StructType([StructField("Call_Number",StringType(),True),
        StructField("Unit_ID",StringType(),True),
        StructField("Incident_Number",StringType(),True),
...
...
# Read file into a Spark DataFrame
input_df = (spark.read.format("csv")
            .option("header", "true")
            .schema(file_schema)
            .load(input_file))


The code fails as soon as execution reaches spark.read.format. It seems the class cannot be found: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException.
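For what it's worth, AmazonServiceException lives in the SDK's core module, and hadoop-aws 2.7.4 was compiled against the monolithic aws-java-sdk 1.7.4, so pairing it with a 1.11.x SDK jar (where classes were split into per-service modules) can produce exactly this NoClassDefFoundError. A minimal sketch of the pairing, assuming the versions declared in the hadoop-aws POMs (the helper name is made up for illustration):

```python
# Hypothetical helper: look up the AWS SDK Maven coordinate that a given
# hadoop-aws release declares in its POM (versions from Maven Central).
HADOOP_AWS_SDK = {
    "2.7.4": "com.amazonaws:aws-java-sdk:1.7.4",           # monolithic pre-1.10 SDK
    "3.2.0": "com.amazonaws:aws-java-sdk-bundle:1.11.375",  # shaded bundle jar
}

def matching_sdk(hadoop_aws_version: str) -> str:
    """Return the SDK coordinate matching a hadoop-aws version."""
    return HADOOP_AWS_SDK[hadoop_aws_version]
```

The point of the sketch is only that the SDK version is dictated by hadoop-aws, not chosen independently; picking the latest 1.11.x SDK next to hadoop-aws 2.7.4 is the mismatch.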

My spark-defaults.conf is configured as follows:

spark.jars.packages                com.amazonaws:aws-java-sdk:1.11.885,org.apache.hadoop:hadoop-aws:2.7.4
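If the version-mismatch theory is right, one hedged fix is to pin the SDK in spark-defaults.conf to the version hadoop-aws 2.7.4 itself declares (1.7.4) rather than 1.11.885 — a sketch, not a configuration verified on this cluster:

```
spark.jars.packages                com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.4
```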

I would appreciate any help. Any ideas?

Traceback (most recent call last):
  File "<stdin>", line 5, …

amazon-s3 amazon-web-services apache-spark pyspark

6 votes · 1 answer · 5,598 views

PySpark S3 error: java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException

I have not managed to set up a Spark cluster that can read files from AWS S3. The software I am using:

  1. hadoop-aws-3.2.0.jar
  2. aws-java-sdk-1.11.887.jar
  3. Spark-3.0.1-bin-hadoop3.2.tgz

Python version: Python 3.8.6

from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
import sys

spark = (SparkSession.builder
         .appName("AuthorsAges")
         .getOrCreate())


spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")


input_file='s3a://spark-test-data/Fire_Department_Calls_for_Service.csv'

file_schema = StructType([StructField("Call_Number",StringType(),True),
        StructField("Unit_ID",StringType(),True),
        StructField("Incident_Number",StringType(),True),
...
...
# Read file into a Spark DataFrame
input_df = (spark.read.format("csv")
            .option("header", "true")
            .schema(file_schema)
            .load(input_file))

The code fails as soon as execution reaches spark.read.format. It seems the class cannot be found: java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException.
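MultiObjectDeleteException comes from the SDK's S3 service module, and in the 1.11.x line the standalone aws-java-sdk jar no longer bundles the per-service classes the way 1.7.4 did. hadoop-aws 3.2.0 declares the shaded aws-java-sdk-bundle 1.11.375 in its POM, so one hedged configuration to try (a sketch based on the POM, not verified against this cluster) is to let Spark resolve that bundle instead of dropping individual SDK jars in place:

```
spark.jars.packages                org.apache.hadoop:hadoop-aws:3.2.0,com.amazonaws:aws-java-sdk-bundle:1.11.375
```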

  File "<stdin>", line 1, in <module>
  File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/readwriter.py", line 178, in load …

python amazon-s3 apache-spark

5 votes · 1 answer · 7,028 views