Bon*_*ond · 7 · amazon-web-services, elasticsearch, amazon-iam, amazon-cloudwatch, terraform
I am trying to stream AWS CloudWatch Logs to ElasticSearch via Kinesis Firehose. The Terraform code below gives an error. Any suggestions? The error is:
```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "cw-kinesis-es-bucket"
  acl    = "private"
}

resource "aws_iam_role" "firehose_role" {
  name = "firehose_test_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_elasticsearch_domain" "es" {
  domain_name           = "firehose-es-test"
  elasticsearch_version = "1.5"

  cluster_config {
    instance_type = "t2.micro.elasticsearch"
  }

  ebs_options {
    ebs_enabled = true
    volume_size = 10
  }

  advanced_options {
    "rest.action.multi.allow_explicit_index" = "true"
  }

  access_policies = <<CONFIG
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "es:*",
      "Principal": "*",
      "Effect": "Allow",
      "Condition": {
        "IpAddress": {"aws:SourceIp": ["xxxxx"]}
      }
    }
  ]
}
CONFIG

  snapshot_options {
    automated_snapshot_start_hour = 23
  }

  tags {
    Domain = "TestDomain"
  }
}

resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name        = "terraform-kinesis-firehose-test-stream"
  destination = "elasticsearch"

  s3_configuration {
    role_arn           = "${aws_iam_role.firehose_role.arn}"
    bucket_arn         = "${aws_s3_bucket.bucket.arn}"
    buffer_size        = 10
    buffer_interval    = 400
    compression_format = "GZIP"
  }

  elasticsearch_configuration {
    domain_arn = "${aws_elasticsearch_domain.es.arn}"
    role_arn   = "${aws_iam_role.firehose_role.arn}"
    index_name = "test"
    type_name  = "test"
  }
}

resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
  name            = "test_kinesis_logfilter"
  role_arn        = "${aws_iam_role.iam_for_lambda.arn}"
  log_group_name  = "loggorup.log"
  filter_pattern  = ""
  destination_arn = "${aws_kinesis_firehose_delivery_stream.test_stream.arn}"
}
```
Mar*_*ins · 16
In this configuration, you are directing Cloudwatch Logs to send log records to Kinesis Firehose, which in turn writes the data it receives to both S3 and ElasticSearch. Thus the AWS services you are using are talking to each other as follows: (diagram not reproduced: Cloudwatch Logs → Kinesis Firehose → S3 and ElasticSearch)

In order for one AWS service to talk to another, the first service must assume a role that grants it access. In IAM terminology, "assuming a role" means temporarily acting with the privileges granted to that role. An AWS IAM role has two key parts: the assume role policy, which controls which services or users may assume the role, and the access policies, which decide what the role grants access to once it has been assumed.

Two separate roles are needed here. One will grant Cloudwatch Logs access to talk to Kinesis Firehose, while the second will grant Kinesis Firehose access to talk to both S3 and ElasticSearch.

For the rest of this answer, I will assume that Terraform is running as a user with full administrative access to the AWS account. If this is not the case, it will first be necessary to ensure that Terraform is running as an IAM principal that has access to create and pass roles.
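As a rough sketch only (the resource name here is hypothetical, and the exact action list depends on what else your configuration manages), the principal running Terraform would need IAM permissions along these lines to create the roles below and hand them to the services:

```hcl
# Sketch: minimal IAM permissions for the principal running Terraform,
# assuming it only needs to create the roles/policies in this answer
# and pass them to Cloudwatch Logs and Kinesis Firehose.
resource "aws_iam_policy" "terraform_iam_access" {
  name = "terraform-iam-access" # hypothetical name

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:GetRole",
        "iam:PutRolePolicy",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
```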
In the example given in the question, the aws_cloudwatch_log_subscription_filter has a role_arn whose assume_role_policy is for AWS Lambda, so Cloudwatch Logs does not have access to assume this role.

To fix this, the assume role policy can be changed to use the service name for Cloudwatch Logs:
```hcl
resource "aws_iam_role" "cloudwatch_logs" {
  name = "cloudwatch_logs_to_firehose"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "logs.us-east-1.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
```
The above allows the Cloudwatch Logs service to assume the role. Now the role needs an access policy that allows writing to the Firehose Delivery Stream:
```hcl
resource "aws_iam_role_policy" "cloudwatch_logs" {
  role = "${aws_iam_role.cloudwatch_logs.name}"

  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:*"],
      "Resource": ["${aws_kinesis_firehose_delivery_stream.test_stream.arn}"]
    }
  ]
}
EOF
}
```
The above grants the Cloudwatch Logs service access to call any Kinesis Firehose action, as long as it targets the specific delivery stream created by this Terraform configuration. This is more access than strictly necessary; for more information, see Actions and Condition Context Keys for Amazon Kinesis Firehose.
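For example, a tighter variant might look like the following sketch (the resource name is hypothetical; the assumption is that Cloudwatch Logs delivers records via the PutRecord/PutRecordBatch actions, so those should suffice):

```hcl
# Sketch of a narrower policy: Cloudwatch Logs only needs to put
# records into the delivery stream, not to manage it.
resource "aws_iam_role_policy" "cloudwatch_logs_minimal" {
  role = "${aws_iam_role.cloudwatch_logs.name}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": ["${aws_kinesis_firehose_delivery_stream.test_stream.arn}"]
    }
  ]
}
EOF
}
```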
To finish this up, the aws_cloudwatch_log_subscription_filter resource must be updated to refer to this new role:
```hcl
resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
  name            = "test_kinesis_logfilter"
  role_arn        = "${aws_iam_role.cloudwatch_logs.arn}"
  log_group_name  = "loggorup.log"
  filter_pattern  = ""
  destination_arn = "${aws_kinesis_firehose_delivery_stream.test_stream.arn}"

  # Wait until the role has required access before creating
  depends_on = ["aws_iam_role_policy.cloudwatch_logs"]
}
```
Unfortunately, due to the internal design of AWS IAM, it can often take several minutes for policy changes to take effect after Terraform submits them, so policy-related errors sometimes occur when trying to create a new resource that uses a policy too soon after the policy itself was created. In this case it is often sufficient to simply wait about ten minutes and then run Terraform again, at which point it should pick up where it left off and retry creating the resource.

The example given in the question already has an IAM role with a suitable assume role policy for Kinesis Firehose:
```hcl
resource "aws_iam_role" "firehose_role" {
  name = "firehose_test_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
```
The above grants Kinesis Firehose access to assume this role. As before, this role also needs an access policy to grant users of the role access to the destination S3 bucket and the ElasticSearch domain:
```hcl
resource "aws_iam_role_policy" "firehose_role" {
  role = "${aws_iam_role.firehose_role.name}"

  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["${aws_s3_bucket.bucket.arn}"]
    },
    {
      "Effect": "Allow",
      "Action": ["es:ESHttpGet"],
      "Resource": ["${aws_elasticsearch_domain.es.arn}/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:*:*:log-group:*:log-stream:*"
      ]
    }
  ]
}
EOF
}
```
The above policy allows Kinesis Firehose to perform any action on the created S3 bucket, to make GET requests against the created ElasticSearch domain, and to write log events to any log stream in Cloudwatch Logs. The last part is not strictly necessary, but is important if logging is enabled for the Firehose Delivery Stream, since otherwise Kinesis Firehose is unable to write error logs back to Cloudwatch Logs.

Again, this is more access than strictly necessary. For details on the specific actions each service supports, refer to the IAM documentation for S3, ElasticSearch, and Cloudwatch Logs.
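A narrower version could look like the following sketch, based on the actions the AWS documentation lists for a Firehose delivery role (the resource name is hypothetical, and exact needs may vary, so treat this as a starting point rather than a definitive list):

```hcl
# Sketch of a tighter policy for the Firehose delivery role:
# object writes to the bucket, and HTTP access to the ES domain.
resource "aws_iam_role_policy" "firehose_role_minimal" {
  role = "${aws_iam_role.firehose_role.name}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "${aws_s3_bucket.bucket.arn}",
        "${aws_s3_bucket.bucket.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "es:DescribeElasticsearchDomain",
        "es:ESHttpGet",
        "es:ESHttpPost",
        "es:ESHttpPut"
      ],
      "Resource": [
        "${aws_elasticsearch_domain.es.arn}",
        "${aws_elasticsearch_domain.es.arn}/*"
      ]
    }
  ]
}
EOF
}
```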
Since this single role has access to write to both S3 and ElasticSearch, it can be specified for both delivery configurations in the Kinesis Firehose delivery stream:
```hcl
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name        = "terraform-kinesis-firehose-test-stream"
  destination = "elasticsearch"

  s3_configuration {
    role_arn           = "${aws_iam_role.firehose_role.arn}"
    bucket_arn         = "${aws_s3_bucket.bucket.arn}"
    buffer_size        = 10
    buffer_interval    = 400
    compression_format = "GZIP"
  }

  elasticsearch_configuration {
    domain_arn = "${aws_elasticsearch_domain.es.arn}"
    role_arn   = "${aws_iam_role.firehose_role.arn}"
    index_name = "test"
    type_name  = "test"
  }

  # Wait until access has been granted before creating the firehose
  # delivery stream.
  depends_on = ["aws_iam_role_policy.firehose_role"]
}
```
With all of the above wired up, the services should have the access they need to connect the parts of this delivery pipeline.
This same general pattern applies to any connection between two AWS services. The important piece of information needed in each case is the service name of the service that will initiate the requests, such as logs.us-east-1.amazonaws.com or firehose.amazonaws.com. These are unfortunately often poorly documented and hard to find, but they can usually be found in the policy examples within each service's user guide.
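The general pattern can be sketched generically as follows (all names and placeholders here are hypothetical and must be filled in per service):

```hcl
# Generic sketch of the service-to-service pattern: a role whose
# assume role policy names the calling service, plus an access
# policy naming the action and target resource.
resource "aws_iam_role" "caller" {
  name = "caller-service-role" # hypothetical name

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {"Service": "SERVICE_NAME.amazonaws.com"}
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "caller" {
  role = "${aws_iam_role.caller.name}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["SERVICE:SomeAction"],
      "Resource": ["TARGET_ARN"]
    }
  ]
}
EOF
}
```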