EntityTooLarge error when uploading a 5 GB file to Amazon S3

Dan*_*ler 10 amazon-s3 jets3t apache-spark parquet apache-spark-sql

According to this announcement, the Amazon S3 file size limit is supposed to be 5 TB, but I get the following error when uploading a 5 GB file:

'/mahler%2Fparquet%2Fpageview%2Fall-2014-2000%2F_temporary%2F_attempt_201410112050_0009_r_000221_2222%2Fpart-r-222.parquet' XML Error Message: 
  <?xml version="1.0" encoding="UTF-8"?>
  <Error>
    <Code>EntityTooLarge</Code>
    <Message>Your proposed upload exceeds the maximum allowed size</Message>
    <ProposedSize>5374138340</ProposedSize>
    ...
    <MaxSizeAllowed>5368709120</MaxSizeAllowed>
  </Error>

This makes it look as though S3 only accepts uploads of up to 5 GB. I am writing out a Parquet dataset with Apache Spark SQL, using the SchemaRDD.saveAsParquetFile method. The full stack trace is:

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/mahler%2Fparquet%2Fpageview%2Fall-2014-2000%2F_temporary%2F_attempt_201410112050_0009_r_000221_2222%2Fpart-r-222.parquet' XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>EntityTooLarge</Code><Message>Your proposed upload exceeds the maximum allowed size</Message><ProposedSize>5374138340</ProposedSize><RequestId>20A38B479FFED879</RequestId><HostId>KxeGsPreQ0hO7mm7DTcGLiN7vi7nqT3Z6p2Nbx1aLULSEzp6X5Iu8Kj6qM7Whm56ciJ7uDEeNn4=</HostId><MaxSizeAllowed>5368709120</MaxSizeAllowed></Error>
        org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:82)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        java.lang.reflect.Method.invoke(Method.java:606)
        org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        org.apache.hadoop.fs.s3native.$Proxy10.storeFile(Unknown Source)
        org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.close(NativeS3FileSystem.java:174)
        org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
        org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
        parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:321)
        parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:111)
        parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:73)
        org.apache.spark.sql.parquet.InsertIntoParquetTable.org$apache$spark$sql$parquet$InsertIntoParquetTable$$writeShard$1(ParquetTableOperations.scala:305)
        org.apache.spark.sql.parquet.InsertIntoParquetTable$$anonfun$saveAsHadoopFile$1.apply(ParquetTableOperations.scala:318)
        org.apache.spark.sql.parquet.InsertIntoParquetTable$$anonfun$saveAsHadoopFile$1.apply(ParquetTableOperations.scala:318)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
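For reference, the write is roughly of the shape sketched below (a minimal sketch, not the actual job; the bucket name, paths and the PageView schema are illustrative placeholders), using the Spark SQL 1.x SchemaRDD API:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Illustrative record type; the real dataset has a different schema.
case class PageView(page: String, views: Long)

object SaveParquetSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("save-parquet-sketch"))
    val sqlContext = new SQLContext(sc)
    // Implicitly converts RDD[PageView] into a SchemaRDD (Spark SQL 1.x only).
    import sqlContext.createSchemaRDD

    val pageviews = sc.textFile("s3n://my-bucket/pageviews/").map { line =>
      val Array(page, views) = line.split("\t")
      PageView(page, views.toLong)
    }

    // Each output partition becomes one part-r-*.parquet object; with the
    // s3n:// (jets3t) filesystem each object is written with a single PUT,
    // which is where a 5 GB per-request limit would bite.
    pageviews.saveAsParquetFile("s3n://my-bucket/parquet/pageview/all-2014-2000")
  }
}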

Is the upload limit still 5 TB? If this limit is the reason I am getting the error, how do I work around it?

Mic*_*bot 17

The object size limit is 5 TB. The size of a single upload is still 5 GB, as explained in the documentation:

Depending on the size of the data you are uploading, Amazon S3 offers the following options:

  • Upload objects in a single operation - With a single PUT operation, you can upload objects up to 5 GB in size.

  • Upload objects in parts - Using the Multipart Upload API, you can upload large objects, up to 5 TB.

http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html

Once you do a multipart upload, S3 validates and reassembles the parts, and you then have a single object in S3, up to 5 TB in size, that can be downloaded as a single entity with a single HTTP GET request... but uploading is potentially much faster, even on files smaller than 5 GB, since you can upload the parts in parallel and even retry any parts that didn't succeed on the first attempt.
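For example, with the AWS SDK for Java, the high-level TransferManager switches to the Multipart Upload API automatically for large files. A minimal sketch (in Scala, assuming the aws-java-sdk-s3 dependency is on the classpath; the bucket, key and file path are placeholders):

import java.io.File

import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.transfer.TransferManagerBuilder

object MultipartUploadSketch {
  def main(args: Array[String]): Unit = {
    // Credentials and region are resolved from the default provider chain.
    val s3 = AmazonS3ClientBuilder.defaultClient()

    // TransferManager uploads large files as a multipart upload: parts are
    // sent in parallel and failed parts are retried individually.
    val tm = TransferManagerBuilder.standard().withS3Client(s3).build()

    val upload = tm.upload("my-bucket", "parquet/part-r-222.parquet",
      new File("/tmp/part-r-222.parquet"))
    upload.waitForCompletion() // blocks until S3 has assembled all the parts

    tm.shutdownNow()
  }
}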


Tom*_*der 11

If you are uploading with the aws cli, you can use the "aws s3 cp" command; it splits the file and performs the multipart upload for you, so you don't have to do it yourself.

aws s3 cp masive-file.ova s3://<your-bucket>/<prefix>/masive-file.ova