Eri*_*edo · amazon-s3, go, aws-sdk-go
I am trying to upload an object to AWS S3 using the Go SDK without creating a file on my system (I want to upload just a string), but I am having a hard time doing it. Can anyone give me an example of how to upload to AWS S3 without needing to create a file?

The AWS example of how to upload a file:
// Uploads a file to an S3 bucket in the region configured in the shared
// config or the AWS_REGION environment variable.
//
// Usage:
//     go run s3_upload_object.go BUCKET_NAME FILENAME
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func exitErrorf(msg string, args ...interface{}) {
	fmt.Fprintf(os.Stderr, msg+"\n", args...)
	os.Exit(1)
}

func main() {
	if len(os.Args) != 3 {
		exitErrorf("bucket and file name required\nUsage: %s bucket_name filename",
			os.Args[0])
	}
	bucket := os.Args[1]
	filename := os.Args[2]

	file, err := os.Open(filename)
	if err != nil {
		exitErrorf("Unable to open file %q, %v", filename, err)
	}
	defer file.Close()

	// Initialize a session in us-west-2 that the SDK will use to load
	// credentials from the shared credentials file ~/.aws/credentials.
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("us-west-2")},
	)
	if err != nil {
		exitErrorf("Unable to create session, %v", err)
	}

	// Set up the S3 Upload Manager. Also see the SDK doc for the Upload
	// Manager for more information on configuring part size and concurrency.
	//
	// http://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#NewUploader
	uploader := s3manager.NewUploader(sess)

	// Upload the file's body to the S3 bucket as an object with the key being
	// the same as the filename.
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		// You can also use the `filepath` standard library package to modify
		// the filename as needed for an S3 object key, such as turning an
		// absolute path into a relative one.
		Key: aws.String(filename),
		// The file to be uploaded. io.ReadSeeker is preferred, as the Uploader
		// can optimize memory use when uploading large content. io.Reader is
		// supported, but requires buffering the reader's bytes for each part.
		Body: file,
	})
	if err != nil {
		// Print the error and exit.
		exitErrorf("Unable to upload %q to %q, %v", filename, bucket, err)
	}
	fmt.Printf("Successfully uploaded %q to %q\n", filename, bucket)
}
I have tried creating the file programmatically, but then it creates the file on my system and uploads it to S3.

In this answer I will post everything that worked for me related to this question. Many thanks to @ThunderCat and @Flimzy for pointing out that the Body parameter of the upload request is already an io.Reader. I will post some sample code with comments on what I learned from this question and how it helped me solve the problem. Maybe it will help people like me and @AlokKumarSingh.
Case 1: You already have the data in memory (e.g., you received it from a streaming/messaging service such as Kafka, Kinesis, or SQS)
func main() {
	if len(os.Args) != 3 {
		fmt.Printf(
			"bucket and file name required\nUsage: %s bucket_name filename",
			os.Args[0],
		)
		return
	}
	bucket := os.Args[1]
	filename := os.Args[2]
	// This is the data you have in memory. In this example it is hard coded,
	// but it may come from very different sources, such as streaming services.
	data := "Hello, world!"
	// Create a reader from the data in memory.
	reader := strings.NewReader(data)
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("us-east-1")},
	)
	if err != nil {
		fmt.Printf("Unable to create session, %v", err)
		return
	}
	uploader := s3manager.NewUploader(sess)
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(filename),
		// Here you pass your reader.
		// The AWS SDK will manage all the memory and reading for you.
		Body: reader,
	})
	if err != nil {
		fmt.Printf("Unable to upload %q to %q, %v", filename, bucket, err)
		return
	}
	fmt.Printf("Successfully uploaded %q to %q\n", filename, bucket)
}
Case 2: You already have a persisted file and want to upload it, but without keeping the whole file in memory:
func main() {
	if len(os.Args) != 3 {
		fmt.Printf(
			"bucket and file name required\nUsage: %s bucket_name filename",
			os.Args[0],
		)
		return
	}
	bucket := os.Args[1]
	filename := os.Args[2]
	// Open your file.
	// The trick here is that os.Open just returns a reader for the desired
	// file, so you will not keep the whole file in memory. I know this might
	// sound obvious, but for a beginner (as I was at the time of the
	// question) it is not.
	fileReader, err := os.Open(filename)
	if err != nil {
		fmt.Printf("Unable to open file %q, %v", filename, err)
		return
	}
	defer fileReader.Close()
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("us-east-1")},
	)
	if err != nil {
		fmt.Printf("Unable to create session, %v", err)
		return
	}
	uploader := s3manager.NewUploader(sess)
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(filename),
		// Here you pass your reader.
		// The AWS SDK will manage all the memory and file reading for you.
		Body: fileReader,
	})
	if err != nil {
		fmt.Printf("Unable to upload %q to %q, %v", filename, bucket, err)
		return
	}
	fmt.Printf("Successfully uploaded %q to %q\n", filename, bucket)
}
Case 3: This is how I implemented it in the final version of my system, but to understand why I did it this way, I need to give you some background.
My use case evolved. The upload code became a function in Lambda, and the files turned out to be huge. What this change means: if I upload a file through an entry point in API Gateway attached to the Lambda function, I have to wait for the whole file to finish uploading inside the Lambda. Since Lambda is priced by invocation duration and memory usage, this can be a really big problem.

So, to solve this problem, I used a pre-signed POST URL for the upload. How does this affect the architecture/workflow?
Instead of uploading to S3 from the backend code, I create and sign, in the backend, a URL for posting the object to S3, and send that URL to the frontend. With that, the frontend just performs a multipart upload to that URL. I know this is much more specific than the original question, but this solution was not easy to discover, so I think documenting it here is a good idea for others.
Here is an example of how to create that pre-signed URL in Node.js.
const AWS = require('aws-sdk');

module.exports.upload = async (event, context, callback) => {
  const s3 = new AWS.S3({ signatureVersion: 'v4' });
  const body = JSON.parse(event.body);
  const params = {
    Bucket: process.env.FILES_BUCKET_NAME,
    Fields: {
      key: body.filename,
    },
    Expires: 60 * 60,
  };
  let promise = new Promise((resolve, reject) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  });
  return await promise
    .then((data) => {
      return {
        statusCode: 200,
        body: JSON.stringify({
          message: 'Successfully created a pre-signed post url.',
          data: data,
        }),
      };
    })
    .catch((err) => {
      return {
        statusCode: 400,
        body: JSON.stringify({
          message: 'An error occurred while trying to create a pre-signed post url',
          error: err,
        }),
      };
    });
};
If you want to use Go, the idea is the same; you just need to change the SDK.
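One caveat: to my knowledge, aws-sdk-go (v1) does not expose a direct equivalent of the Node SDK's createPresignedPost. The closest built-in mechanism is a pre-signed PUT URL, which the frontend then consumes with a plain HTTP PUT instead of a multipart POST. A minimal sketch, assuming placeholder bucket and key names:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(sess)
	// Build the request without sending it, then presign it. Signing is a
	// purely local operation; no network call happens here.
	req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String("my-bucket"), // placeholder: your bucket name
		Key:    aws.String("my-key"),    // placeholder: your object key
	})
	urlStr, err := req.Presign(15 * time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(urlStr)
}
```

The returned URL can be handed to the frontend and used until it expires.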
Viewed 7,914 times.