How do I upload files larger than ~5 MB to Amazon S3 (official SDK)?

Inv*_*ion 36 .net amazon-s3

I'm using the latest version of the official Amazon S3 SDK (1.0.14.1) to create a backup tool. So far everything works correctly if the file I'm uploading is under 5 MB, but when any file is larger than 5 MB the upload fails with the following exception:

System.Net.WebException: The request was aborted: The request was canceled. ---> System.IO.IOException: Cannot close stream until all bytes are written.
   at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
   --- End of inner exception stack trace ---
   at Amazon.S3.AmazonS3Client.ProcessRequestError(String actionName, HttpWebRequest request, WebException we, HttpWebResponse errorResponse, String requestAddr, WebHeaderCollection& respHdrs, Type t)
   at Amazon.S3.AmazonS3Client.Invoke[T](S3Request userRequest)
   at Amazon.S3.AmazonS3Client.PutObject(PutObjectRequest request)
   at BackupToolkit.S3Module.UploadFile(String sourceFileName, String destinationFileName) in W:\code\AutoBackupTool\BackupToolkit\S3Module.cs:line 88

Note: 5 MB is roughly the boundary of failure; it can be slightly lower or higher.

My assumption is that the connection is timing out and the stream is being automatically closed before the file upload completes.

I tried to find a way to set a long timeout (but I couldn't find such an option in either AmazonS3 or AmazonS3Config).

Any ideas on how to increase the timeout (such as an application-wide setting I could use), or is it unrelated to a timeout issue?


Code:

var s3Client = AWSClientFactory.CreateAmazonS3Client(AwsAccessKey, AwsSecretKey);

var putObjectRequest = new PutObjectRequest {

    BucketName            = Bucket,
    FilePath              = sourceFileName,
    Key                   = destinationFileName,
    MD5Digest             = md5Base64,
    GenerateMD5Digest     = true
};

using (var upload = s3Client.PutObject(putObjectRequest)) {  }

Inv*_*ion 42

Updated answer:

I recently updated a project that uses the Amazon AWS .NET SDK (to version 1.4.1.0), and in this version there are two improvements that did not exist when I wrote the original answer here.

  1. You can now set Timeout to -1 to have an infinite time limit for the put operation.
  2. There is now an additional property on PutObjectRequest called ReadWriteTimeout, which can be set (in milliseconds) to time out at the stream read/write level, as opposed to the whole put-operation level.

So my code now looks like this:

var putObjectRequest = new PutObjectRequest {

    BucketName            = Bucket,
    FilePath              = sourceFileName,
    Key                   = destinationFileName,
    MD5Digest             = md5Base64,
    GenerateMD5Digest     = true,
    Timeout               = -1,
    ReadWriteTimeout      = 300000     // 5 minutes in milliseconds
};

Original answer:


I managed to find the answer...

Before posting the question I had explored AmazonS3 and AmazonS3Config, but not PutObjectRequest.

Inside PutObjectRequest there is a Timeout property (in milliseconds). I have successfully used this to upload larger files (note: setting it to 0 does not remove the timeout; you need to specify a positive number of milliseconds... I've gone for 1 hour).

This works fine:

var putObjectRequest = new PutObjectRequest {

    BucketName            = Bucket,
    FilePath              = sourceFileName,
    Key                   = destinationFileName,
    MD5Digest             = md5Base64,
    GenerateMD5Digest     = true,
    Timeout               = 3600000    // 1 hour in milliseconds
};


Nic*_*ell 10

I kept running into a similar problem and started using the TransferUtility class to perform multipart uploads.

This code is working. I did have problems when the timeout was set too low though!

var request = new TransferUtilityUploadRequest()
    .WithBucketName(BucketName)
    .WithFilePath(sourceFile.FullName)
    .WithKey(key)
    .WithTimeout(100 * 60 * 60 * 1000)   // 100 hours, in milliseconds
    .WithPartSize(10 * 1024 * 1024)      // 10 MB parts
    .WithSubscriber((src, e) =>
    {
        Console.CursorLeft = 0;
        Console.Write("{0}: {1} of {2}    ", sourceFile.Name, e.TransferredBytes, e.TotalBytes);
    });

utility.Upload(request);   // utility is an existing TransferUtility instance

As I type this there's a 4 GB upload in progress, and it's already gotten further than it ever has before!


Mal*_*lil 7

The AWS SDK for .NET has two major APIs for working with Amazon S3. Both can upload small and large files to S3.

1. Low-level API:

The low-level API follows the same pattern as the low-level APIs for the other services in the SDK. There is a client object called AmazonS3Client that implements the IAmazonS3 interface. It contains a method for every service operation exposed by S3.

Namespaces: Amazon.S3, Amazon.S3.Model

// Step 1 : 
AmazonS3Config s3Config = new AmazonS3Config();
s3Config.RegionEndpoint = GetRegionEndPoint();

// Step 2 :
using(var client = new AmazonS3Client(My_AWSAccessKey, My_AWSSecretKey, s3Config) )
{
    // Step 3 :
    PutObjectRequest request = new PutObjectRequest();
    request.Key = My_key;
    request.InputStream = My_fileStream;
    request.BucketName = My_BucketName;

    // Step 4 : Finally place object to S3
    client.PutObject(request);
}
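If you need control over the multipart mechanics at this low level, the SDK also exposes the multipart calls (InitiateMultipartUpload, UploadPart, CompleteMultipartUpload) directly. The following is a hedged sketch only: the request/response shapes are as in v2-era versions of the AWS SDK for .NET and should be verified against your SDK version, and My_BucketName, My_key, and My_filePath are the same placeholders used in the example above.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Amazon.S3;
using Amazon.S3.Model;

// Sketch: manual multipart upload with the low-level API.
// Every part except the last must be at least 5 MB.
long partSize = 5 * 1024 * 1024;

// Step 1: start the multipart upload and remember its UploadId.
var initResponse = client.InitiateMultipartUpload(new InitiateMultipartUploadRequest
{
    BucketName = My_BucketName,
    Key = My_key
});

// Step 2: upload the file piece by piece, collecting each part's ETag.
var partETags = new List<PartETag>();
long fileLength = new FileInfo(My_filePath).Length;
long position = 0;
for (int partNumber = 1; position < fileLength; partNumber++)
{
    var partResponse = client.UploadPart(new UploadPartRequest
    {
        BucketName   = My_BucketName,
        Key          = My_key,
        UploadId     = initResponse.UploadId,
        PartNumber   = partNumber,
        PartSize     = Math.Min(partSize, fileLength - position),
        FilePosition = position,
        FilePath     = My_filePath
    });
    partETags.Add(new PartETag(partNumber, partResponse.ETag));
    position += partSize;
}

// Step 3: tell S3 to assemble the parts into the final object.
client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
{
    BucketName = My_BucketName,
    Key        = My_key,
    UploadId   = initResponse.UploadId,
    PartETags  = partETags
});
```

On failure you should call AbortMultipartUpload with the same UploadId, otherwise the already-uploaded parts keep accruing storage charges. In practice this bookkeeping is exactly what TransferUtility (below) does for you.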

2. TransferUtility: (I recommend using this API)

TransferUtility runs on top of the low-level API. It is a simple interface for putting objects into S3 that handles the most common uses of S3. The biggest benefit is in putting objects: for example, TransferUtility detects when a file is large and switches to multipart upload mode.

Namespace: Amazon.S3.Transfer

// Step 1 : Create "Transfer Utility" (replacement of old "Transfer Manager")
TransferUtility fileTransferUtility =
     new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));

// Step 2 : Create Request object
TransferUtilityUploadRequest uploadRequest =
    new TransferUtilityUploadRequest
    {
        BucketName = My_BucketName,
        FilePath = My_filePath, 
        Key = My_keyName
    };

// Step 3 : Event Handler that will be automatically called on each transferred byte 
uploadRequest.UploadProgressEvent +=
    new EventHandler<UploadProgressArgs>
        (uploadRequest_UploadPartProgressEvent);

static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
{    
    Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}

// Step 4 : Hit upload and send data to S3
fileTransferUtility.Upload(uploadRequest);
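The point at which TransferUtility switches from a single PUT to a multipart upload, and how many parts it sends concurrently, can be tuned through TransferUtilityConfig. A hedged sketch, assuming the TransferUtilityConfig property names of recent .NET SDK versions; verify them against your SDK version:

```csharp
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

// Sketch: tune when TransferUtility switches to multipart mode.
var transferConfig = new TransferUtilityConfig
{
    // Files above this size (in bytes) are uploaded in parts;
    // smaller files go up as a single PUT.
    MinSizeBeforePartUpload = 16 * 1024 * 1024,

    // Number of parts uploaded concurrently during a multipart upload.
    ConcurrentServiceRequests = 4
};

var fileTransferUtility = new TransferUtility(
    new AmazonS3Client(RegionEndpoint.USEast1),
    transferConfig);
```

The defaults are usually fine; raising the concurrency mainly helps on high-bandwidth, high-latency links.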


use*_*142 6

Nick Randell had the right idea with his post above; going a step further, here is another example with some alternative event handling and a method to get the percent done for the uploading file:

    private static string WritingLargeFile(AmazonS3 client, int mediaId, string bucketName, string amazonKey, string fileName, string fileDesc, string fullPath)
    {
        try
        {

            Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtilityUploadRequest");
            var request = new TransferUtilityUploadRequest()
                .WithBucketName(bucketName)
                .WithKey(amazonKey)
                .WithMetadata("fileName", fileName)
                .WithMetadata("fileDesc", fileDesc)
                .WithCannedACL(S3CannedACL.PublicRead)
                .WithFilePath(fullPath)
                .WithTimeout(100 * 60 * 60 * 1000) //100 hour timeout
                .WithPartSize(5 * 1024 * 1024); // Upload in 5MB pieces 

            request.UploadProgressEvent += new EventHandler<UploadProgressArgs>(uploadRequest_UploadPartProgressEvent);

            Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtility");
            TransferUtility fileTransferUtility = new TransferUtility(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"]);

            Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Start Upload");
            fileTransferUtility.Upload(request);

            return amazonKey;
        }
        catch (AmazonS3Exception amazonS3Exception)
        {
            if (amazonS3Exception.ErrorCode != null &&
                (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
            {
                Log.Add(LogTypes.Debug, mediaId, "Please check the provided AWS Credentials.");
            }
            else
            {
                Log.Add(LogTypes.Debug, mediaId, String.Format("An error occurred with the message '{0}' when writing an object", amazonS3Exception.Message));
            }
            return String.Empty; //Failed
        }
    }

    private static Dictionary<string, int> uploadTracker = new Dictionary<string, int>();
    static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
    {
        TransferUtilityUploadRequest req = sender as TransferUtilityUploadRequest;          
        if (req != null)
        {
            string fileName = req.FilePath.Split('\\').Last();
            if (!uploadTracker.ContainsKey(fileName))
                uploadTracker.Add(fileName, e.PercentDone);

            //When percentage done changes add logentry:
            if (uploadTracker[fileName] != e.PercentDone)
            {
                uploadTracker[fileName] = e.PercentDone;
                Log.Add(LogTypes.Debug, 0, String.Format("WritingLargeFile progress: {1} of {2} ({3}%) for file '{0}'", fileName, e.TransferredBytes, e.TotalBytes, e.PercentDone));
            }
        }

    }

    public static int GetAmazonUploadPercentDone(string fileName)
    {
        if (!uploadTracker.ContainsKey(fileName))
            return 0;

        return uploadTracker[fileName];
    }