Terraform - upload files to S3 on every apply

Mut*_* PL 3 amazon-s3 amazon-web-services terraform terraform-provider-aws

I need to upload a folder to an S3 bucket. The first time I run apply, it uploads fine, but I have two problems here:

  1. The version output for the upload is null. I expected some version_id, e.g. 1, 2, 3.
  2. When terraform apply is run again, it shows Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I want everything to be uploaded every time I run terraform apply, creating a new version.

What am I doing wrong? Here is my Terraform configuration:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my_bucket_name"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "my_files.zip"
}

output "my_bucket_file_version" {
  value = "${aws_s3_bucket_object.file_upload.version_id}"
}

sdg*_*sdh 16

The preferred solution nowadays is to use the source_hash attribute. Note that aws_s3_bucket_object has been replaced by aws_s3_object.

locals {
  object_source = "${path.module}/my_files.zip"
}

resource "aws_s3_object" "file_upload" {
  bucket      = "my_bucket"
  key         = "my_bucket_key"
  source      = local.object_source
  source_hash = filemd5(local.object_source)
}
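
With bucket versioning enabled, the output from the question only needs to point at the new resource type, roughly like this:

output "my_bucket_file_version" {
  value = aws_s3_object.file_upload.version_id
}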

Note that etag can run into problems when encryption is in use.


Mar*_*ins 6

Terraform only makes changes to remote objects when it detects a difference between the configuration and the remote object's attributes. As you've written it so far, the configuration includes only the filename; it says nothing about the content of the file, so Terraform can't react to the file changing.

To make subsequent changes, there are a few options:

  • You could use a different local filename for each new version.
  • You could use a different remote object path for each new version (a sketch of this follows the list).
  • You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
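
For illustration, the second of those options might look roughly like the sketch below, where var.release_version is a hypothetical input variable you would bump for each new upload:

variable "release_version" {
  type = string
}

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  # A new key (and therefore a new object) is created whenever release_version changes.
  key    = "releases/${var.release_version}/my_files.zip"
  source = "${path.module}/my_files.zip"
}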

The last of these seems closest to what you want in this case. To do that, add the etag argument and set it to the MD5 hash of the file:

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "${path.module}/my_files.zip"
  etag   = "${filemd5("${path.module}/my_files.zip")}"
}

With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.


(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
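
If you are on AWS provider v4 or later, note that the inline versioning block on aws_s3_bucket is deprecated there and versioning is configured through a separate resource instead. A minimal sketch, assuming a v4+ provider:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my_bucket_name"
}

resource "aws_s3_bucket_versioning" "my_bucket" {
  bucket = aws_s3_bucket.my_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}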