AWS S3: large file upload fails with ResetException: Failed to reset the request input stream

lol*_*ski 4 java scala amazon-s3 amazon-web-services aws-sdk

Can anyone tell me what is wrong with the code below? Large file uploads (> 10 GB) always fail with ResetException: Failed to reset the request input stream.

The failure always occurs after some time (roughly 15 minutes), which must mean the upload is failing somewhere in the middle of the transfer.

Here is what I have tried while debugging the problem:

  1. in.markSupported() == false // checking if mark is supported on my FileInputStream

    I strongly suspect this is the problem, because the S3 SDK apparently needs to reset the stream at some point during the upload, presumably after the connection is lost or the transfer hits an error.

  2. Wrapped my FileInputStream in a BufferedInputStream to enable mark support. Calling in.markSupported() now returns true, so mark support is there. Strangely, the upload still fails with the same error.

  3. Added putRequest.getRequestClientOptions.setReadLimit(n) with n = 100000 (100 KB) and 800000000 (800 MB), but it still throws the same error. I suspect that is because this parameter is only relevant when resetting the stream, which, as noted above, FileInputStream does not support. (See the mark/reset sketch right after this list.)
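For context, the "Caused by: java.io.IOException: Resetting to invalid mark" further down in the stack trace is plain java.io.BufferedInputStream behaviour: once more bytes than the mark limit have been read, the mark is discarded and reset() fails. A minimal, self-contained sketch (plain JDK streams, no AWS SDK involved; the sizes are arbitrary) that reproduces the same exception:

import java.io.{BufferedInputStream, ByteArrayInputStream}

object MarkResetDemo extends App {
  val data = Array.fill[Byte](1024 * 1024)(1)   // 1 MB of dummy data
  val in   = new BufferedInputStream(new ByteArrayInputStream(data))

  in.mark(128 * 1024)                           // mark stays valid for at most 128 KB
  val buf = new Array[Byte](256 * 1024)
  in.read(buf)                                  // read past the 128 KB limit -> mark is discarded

  in.reset()                                    // throws java.io.IOException: Resetting to invalid mark
}

If the SDK's retry has to rewind to the start of a part, a read limit smaller than the data already sent for that part presumably cannot help, no matter how the stream is wrapped.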

Interestingly, the same problem does not occur on my AWS development account. I assume that is simply because the dev account is not under the same heavy load as my production account, so uploads there can run smoothly without any failures.

Please see the code below:

import java.io.{BufferedInputStream, File, FileInputStream, InputStream}

import com.amazonaws.auth.BasicAWSCredentials
import com.amazonaws.services.s3.AmazonS3Client
import com.amazonaws.services.s3.model.{ObjectMetadata, PutObjectRequest, StorageClass}
import com.amazonaws.services.s3.transfer.{TransferManager, Upload}

object S3TransferExample {
  // in main class
  def main(args: Array[String]): Unit = {
    ...
    val file = new File("/mnt/10gbfile.zip")
    val in = new FileInputStream(file)
    // val in = new BufferedInputStream(new FileInputStream(file)) --> tried wrapping the FileInputStream in a BufferedInputStream, but it didn't help..
    upload("mybucket", "mykey", in, file.length, "application/zip").waitForUploadResult
    ...
  }

  val awsCred = new BasicAWSCredentials("access_key", "secret_key")
  val s3Client = new AmazonS3Client(awsCred)
  val tx = new TransferManager(s3Client)

  def upload(bucketName: String,
             keyName: String,
             inputStream: InputStream,
             contentLength: Long,
             contentType: String,
             serverSideEncryption: Boolean = true,
             storageClass: StorageClass = StorageClass.ReducedRedundancy): Upload = {
    val metaData = new ObjectMetadata
    metaData.setContentType(contentType)
    metaData.setContentLength(contentLength)

    if (serverSideEncryption) {
      metaData.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION)
    }

    val putRequest = new PutObjectRequest(bucketName, keyName, inputStream, metaData)
    putRequest.setStorageClass(storageClass)
    putRequest.getRequestClientOptions.setReadLimit(100000)

    tx.upload(putRequest)
  }
}

Here is the full stack trace:

Unable to execute HTTP request: mybucket.s3.amazonaws.com failed to respond
org.apache.http.NoHttpResponseException: mybuckets3.amazonaws.com failed to respond
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260) ~[httpcore-4.3.2.jar:4.3.2]
    at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283) ~[httpcore-4.3.2.jar:4.3.2]
    at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271) ~[httpcore-4.3.2.jar:4.3.2]
    at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:66) ~[aws-java-sdk-core-1.9.13.jar:na]
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123) ~[httpcore-4.3.2.jar:4.3.2]
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57) ~[httpclient-4.3.4.jar:4.3.4]
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:685) [aws-java-sdk-core-1.9.13.jar:na]
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460) [aws-java-sdk-core-1.9.13.jar:na]
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295) [aws-java-sdk-core-1.9.13.jar:na]
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3710) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2799) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2784) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadPartsInSeries(UploadCallable.java:259) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInParts(UploadCallable.java:193) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:125) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:129) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50) [aws-java-sdk-s3-1.9.13.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_40]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
com.amazonaws.ResetException: Failed to reset the request input stream;  If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)
  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:636)
  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3710)
  at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2799)
  at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2784)
  at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadPartsInSeries(UploadCallable.java:259)
  at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInParts(UploadCallable.java:193)
  at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:125)
  at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:129)
  at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Resetting to invalid mark
  at java.io.BufferedInputStream.reset(BufferedInputStream.java:448)
  at com.amazonaws.internal.SdkBufferedInputStream.reset(SdkBufferedInputStream.java:106)
  at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:103)
  at com.amazonaws.event.ProgressInputStream.reset(ProgressInputStream.java:139)
  at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:103)
  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:634) 

lol*_*ski 5

This definitely looks like a bug, and I have reported it. The workaround is to use the other PutObjectRequest constructor, which accepts a File instead of an InputStream:

def upload(bucketName: String,
           keyName: String,
           file: File,
           contentLength: Long,
           contentType: String,
           serverSideEncryption: Boolean = true,
           storageClass: StorageClass = StorageClass.ReducedRedundancy): Upload = {
  val metaData = new ObjectMetadata
  metaData.setContentType(contentType)
  metaData.setContentLength(contentLength)

  if (serverSideEncryption) {
    metaData.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION)
  }

  val putRequest = new PutObjectRequest(bucketName, keyName, file)
  putRequest.setStorageClass(storageClass)
  putRequest.getRequestClientOptions.setReadLimit(100000)
  putRequest.setMetadata(metaData)

  tx.upload(putRequest)
}
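For completeness, the matching call-site change in main() (mirroring the question's example; bucket, key and path are placeholders) is to pass the File itself instead of opening a FileInputStream:

val file = new File("/mnt/10gbfile.zip")
upload("mybucket", "mykey", file, file.length, "application/zip").waitForUploadResult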


ose*_*003 5

I have looked into this problem, and it is a long story.

The conclusion: pass a system property to the JVM by adding the following option to the java command line:

-Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=YOUR_MAX_PUT_SIZE

See https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/AmazonS3Client.java#L1668

This tells AmazonS3Client the maximum size of the (non-expandable) buffer it sets up so that the request content can be re-read when a retry is needed.
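If changing the command line is not an option, the same property can presumably also be set programmatically, as long as it happens before any AmazonS3Client or TransferManager is constructed. A minimal sketch (the 100 MB value is only an illustrative placeholder, not a recommendation):

// Equivalent of -Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=..., set early during start-up.
val maxPutSizeBytes = 100L * 1024 * 1024   // illustrative value only
System.setProperty("com.amazonaws.sdk.s3.defaultStreamBufferSize", maxPutSizeBytes.toString)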


Mic*_*bot 1

S3 does not support PUT requests that large.

The largest object that can be uploaded in a single PUT is 5 GB.

http://aws.amazon.com/s3/faqs

Beyond that, you have to use the multipart upload API, which allows parts of up to 5 GB each and a maximum object size of 5 TB. You can also use multipart for files smaller than 5 GB, since multipart supports uploading the individual parts in parallel.
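For reference, a minimal sketch of the low-level multipart API in the Java SDK (assuming an existing AmazonS3Client named s3Client; error handling and aborting the upload on failure are omitted):

import java.io.File

import scala.collection.JavaConverters._

import com.amazonaws.services.s3.AmazonS3Client
import com.amazonaws.services.s3.model._

def multipartUpload(s3Client: AmazonS3Client, bucket: String, key: String, file: File): Unit = {
  // 1. start the multipart upload and remember its id
  val uploadId = s3Client
    .initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key))
    .getUploadId

  // 2. upload the file in fixed-size parts (100 MB here, purely illustrative)
  val partSize = 100L * 1024 * 1024
  val partETags = (0L until file.length by partSize).zipWithIndex.map { case (offset, idx) =>
    val part = new UploadPartRequest()
      .withBucketName(bucket)
      .withKey(key)
      .withUploadId(uploadId)
      .withPartNumber(idx + 1)                            // part numbers start at 1
      .withFile(file)
      .withFileOffset(offset)
      .withPartSize(math.min(partSize, file.length - offset))
    s3Client.uploadPart(part).getPartETag
  }

  // 3. tell S3 to assemble the parts into the final object
  s3Client.completeMultipartUpload(
    new CompleteMultipartUploadRequest(bucket, key, uploadId, partETags.asJava))
}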

  • My upload already goes through the TransferManager, which means the SDK handles the multipart upload automatically; see the high-level sketch below, which mirrors the multipart upload example in the docs and is similar to what I already have: docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileJava.html (4 upvotes)
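For comparison, a minimal sketch of that high-level, file-based TransferManager usage (bucket, key and path are placeholders; when the file is large enough, TransferManager splits it into parts and uploads them in parallel):

import java.io.File

import com.amazonaws.services.s3.transfer.TransferManager

val tx = new TransferManager()                       // uses the default credential chain
val upload = tx.upload("mybucket", "mykey", new File("/mnt/10gbfile.zip"))
upload.waitForCompletion()                           // blocks until the (multipart) upload finishes
tx.shutdownNow()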