I'm getting an S3 SignatureDoesNotMatch error when trying to write a DataFrame to S3 using Spark.

Symptoms / things tried:

- the AWS secret key does not contain any non-alphanumeric characters, as suggested here;
- running spark-2.0.2-bin-hadoop2.7 in local mode on an m3.xlarge;
- the code boils down to:
spark-submit \
--verbose \
--conf spark.hadoop.fs.s3n.impl=org.apache.hadoop.fs.s3native.NativeS3FileSystem \
--conf spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem \
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--packages org.apache.hadoop:hadoop-aws:2.7.3 \
--driver-java-options '-Dcom.amazonaws.services.s3.enableV4' \
foobar.py
# foobar.py
from pyspark import SparkContext
from pyspark.sql import SparkSession

sc = SparkContext.getOrCreate()
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", 'xxx')
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", 'xxx')
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", 's3.dualstack.ap-southeast-2.amazonaws.com')

hc = SparkSession.builder.enableHiveSupport().getOrCreate()
# in_file_path / out_file_path are defined elsewhere
dataframe = hc.read.parquet(in_file_path)
dataframe.write.csv(
    path=out_file_path,
    mode='overwrite',
    compression='gzip',
    sep=',',
    quote='"',
    escape='\\',
    escapeQuotes='true',
)
Spark then spits out the following error.

With log4j set to verbose, this is what appears to be happening:
/_temporary/foorbar.part-xxx; >> PUT XXX/part-r-00025-ae3d5235-932f-4b7d-ae55-b159d1c1343d.gz.parquet HTTP/1.1
>> Host: XXX.s3-ap-southeast-2.amazonaws.com
>> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
>> X-Amz-Date: 20161104T005749Z
>> x-amz-metadata-directive: REPLACE
>> Connection: close
>> User-Agent: aws-sdk-java/1.10.11 Linux/3.13.0-100-generic OpenJDK_64-Bit_Server_VM/25.91-b14/1.8.0_91 com.amazonaws.services.s3.transfer.TransferManager/1.10.11
>> x-amz-server-side-encryption-aws-kms-key-id: 5f88a222-715c-4a46-a64c-9323d2d9418c
>> x-amz-server-side-encryption: aws:kms
>> x-amz-copy-source: /XXX/_temporary/0/task_201611040057_0001_m_000025/part-r-00025-ae3d5235-932f-4b7d-ae55-b159d1c1343d.gz.parquet
>> Accept-Ranges: bytes
>> Authorization: AWS4-HMAC-SHA256 Credential=AKIAJZCSOJPB5VX2B6NA/20161104/ap-southeast-2/s3/aws4_request, SignedHeaders=accept-ranges;connection;content-length;content-type;etag;host;last-modified;user-agent;x-amz-content-sha256;x-amz-copy-source;x-amz-date;x-amz-metadata-directive;x-amz-server-side-encryption;x-amz-server-side-encryption-aws-kms-key-id, Signature=48e5fe2f9e771dc07a9c98c7fd98972a99b53bfad3b653151f2fcba67cff2f8d
>> ETag: 31436915380783143f00299ca6c09253
>> Content-Type: application/octet-stream
>> Content-Length: 0
DEBUG wire: << "HTTP/1.1 403 Forbidden[\r][\n]"
DEBUG wire: << "x-amz-request-id: 849F990DDC1F3684[\r][\n]"
DEBUG wire: << "x-amz-id-2: 6y16TuQeV7CDrXs5s7eHwhrpa1Ymf5zX3IrSuogAqz9N+UN2XdYGL2FCmveqKM2jpGiaek5rUkM=[\r][\n]"
DEBUG wire: << "Content-Type: application/xml[\r][\n]"
DEBUG wire: << "Transfer-Encoding: chunked[\r][\n]"
DEBUG wire: << "Date: Fri, 04 Nov 2016 00:57:48 GMT[\r][\n]"
DEBUG wire: << "Server: AmazonS3[\r][\n]"
DEBUG wire: << "Connection: close[\r][\n]"
DEBUG wire: << "[\r][\n]"
DEBUG DefaultClientConnection: Receiving response: HTTP/1.1 403 Forbidden
<< HTTP/1.1 403 Forbidden
<< x-amz-request-id: 849F990DDC1F3684
<< x-amz-id-2: 6y16TuQeV7CDrXs5s7eHwhrpa1Ymf5zX3IrSuogAqz9N+UN2XdYGL2FCmveqKM2jpGiaek5rUkM=
<< Content-Type: application/xml
<< Transfer-Encoding: chunked
<< Date: Fri, 04 Nov 2016 00:57:48 GMT
<< Server: AmazonS3
<< Connection: close
DEBUG requestId: x-amzn-RequestId: not available
I ran into exactly the same problem and found a solution with the help of this article (other resources point in the same direction). After setting these configuration options, the write to S3 succeeded:
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
spark.speculation false
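For reference, the same two settings can also be applied programmatically when building the session. This is just a minimal sketch of my own, not part of the original fix; the app name, toy DataFrame, and bucket path are placeholders, and it assumes hadoop-aws plus the AWS SDK are already on the classpath:

# Minimal sketch (assumes hadoop-aws + aws-java-sdk are on the classpath).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3a-write-example")  # placeholder name
    # Commit algorithm v2 renames task output directly into the destination,
    # skipping the second round of renames done by the v1 job committer.
    .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
    # Speculative task attempts can race each other during the S3 commit.
    .config("spark.speculation", "false")
    .getOrCreate()
)

df = spark.range(100)  # toy DataFrame, just something to write
df.write.mode("overwrite").csv("s3a://my-bucket/out")  # hypothetical bucket/path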
I'm using Spark 2.1.1 with Hadoop 2.7. My final spark-submit command looked like this:
spark-submit \
--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 \
--conf spark.hadoop.fs.s3a.endpoint=s3.eu-central-1.amazonaws.com \
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true \
--conf spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true \
--conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 \
--conf spark.speculation=false \
...
In addition, I defined the following environment variables:
AWS_ACCESS_KEY_ID=****
AWS_SECRET_ACCESS_KEY=****
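If you would rather keep the keys out of your shell profile, one option (my assumption, not part of the answer above) is to export them from the driver script itself before the SparkContext is created; in local mode the launched JVM inherits the Python process environment:

# Hedged sketch: set credentials in the driver's environment *before* the
# SparkContext (and thus the JVM) starts, so the AWS SDK's environment
# credentials provider can pick them up. Local mode only; executors in a
# cluster would not inherit this environment.
import os

os.environ["AWS_ACCESS_KEY_ID"] = "****"       # placeholder
os.environ["AWS_SECRET_ACCESS_KEY"] = "****"   # placeholder

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()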
One thing Spark hits on S3, whatever the driver, is that it is an eventually consistent object store, where: renames take O(bytes) to complete, and the delayed consistency between PUT and LIST can break the commit. More succinctly: Spark assumes that after you write something to a filesystem, if you ls the parent directory you will find the thing you just wrote. S3 doesn't offer that, hence the term "eventual consistency". Now, in HADOOP-13786 we are trying to do better, and HADOOP-13345 looks at whether we can't use Amazon Dynamo for a faster, consistent view of the world. But you have to pay the DynamoDB premium for that feature.
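To make that "write, then ls the parent" assumption concrete, here is an illustrative boto3 sketch of my own (the bucket and key names are hypothetical). Spark's rename-based commit effectively relies on the final check printing True immediately after the PUT:

# Illustrative only: the PUT-then-LIST visibility that Spark's commit assumes.
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"                   # hypothetical bucket
key = "table/_temporary/part-00000"    # hypothetical task-attempt output

s3.put_object(Bucket=bucket, Key=key, Body=b"data")                     # the PUT
resp = s3.list_objects_v2(Bucket=bucket, Prefix="table/_temporary/")    # the LIST
keys = [obj["Key"] for obj in resp.get("Contents", [])]
print(key in keys)  # under eventual LIST consistency this could be False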
Finally, everything currently known about troubleshooting s3a, including the possible causes of 403 errors, is documented online. Hope it helps, and if you identify another cause, patches are welcome.