I am trying to deploy a cluster with self-managed node groups. No matter which configuration options I use, I always end up with the following error:
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

  with module.eks-ssp.kubernetes_config_map.aws_auth[0],
  on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":
  resource "kubernetes_config_map" "aws_auth" {
The .tf file looks like this:
module "eks-ssp" {\nsource = "github.com/aws-samples/aws-eks-accelerator-for-terraform"\n\n# EKS CLUSTER\ntenant = "DevOpsLabs2"\nenvironment = "dev-test"\nzone = ""\nterraform_version = "Terraform v1.1.4"\n\n# EKS Cluster VPC and Subnet mandatory config\nvpc_id = "xxx"\nprivate_subnet_ids = ["xxx","xxx", "xxx", "xxx"]\n\n# EKS CONTROL PLANE VARIABLES\ncreate_eks = true\nkubernetes_version = "1.19"\n\n# EKS SELF MANAGED NODE GROUPS\nself_managed_node_groups = {\nself_mg = {\nnode_group_name = "DevOpsLabs2"\nsubnet_ids = ["xxx","xxx", "xxx", "xxx"]\ncreate_launch_template = true\nlaunch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows\ncustom_ami_id = "xxx"\npublic_ip = true # Enable only for public subnets\npre_userdata = <<-EOT\nyum install -y amazon-ssm-agent \\\nsystemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \\\nEOT\n\ndisk_size = 20\ninstance_type = "t2.small"\ndesired_size = 2\nmax_size = 10\nmin_size = 2\ncapacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"\n\nk8s_labels = {\nEnvironment = "dev-test"\nZone = ""\nWorkerType = "SELF_MANAGED_ON_DEMAND"\n}\n\nadditional_tags = {\nExtraTag = "t2x-on-demand"\nName = "t2x-on-demand"\nsubnet_type = "public"\n}\ncreate_worker_security_group = false # Creates a dedicated sec group for this Node Group\n},\n}\n}\n\nmodule "eks-ssp-kubernetes-addons" {\nsource = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"\n\neks_cluster_id = module.eks-ssp.eks_cluster_id\n\n# EKS Addons\nenable_amazon_eks_vpc_cni = true\nenable_amazon_eks_coredns = true\nenable_amazon_eks_kube_proxy = true\nenable_amazon_eks_aws_ebs_csi_driver = true\n\n#K8s Add-ons\nenable_aws_load_balancer_controller = true\nenable_metrics_server = true\nenable_cluster_autoscaler = true\nenable_aws_for_fluentbit = true\nenable_argocd = true\nenable_ingress_nginx = true\n\ndepends_on = [module.eks-ssp.self_managed_node_groups]\n}\nRun Code Online (Sandbox Code Playgroud)\n提供商:
terraform {

  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}
Based on the examples provided in the GitHub repository [1], my guess is that the provider configuration blocks are missing, which is why this does not work as expected. Looking at the code provided in the question, it seems the following needs to be added:
data "aws_region" "current" {}
data "aws_eks_cluster" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
provider "aws" {
region = data.aws_region.current.id
alias = "default" # this should match the named profile you used if at all
}
provider "kubernetes" {
experiments {
manifest_resource = true
}
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
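As a side note, the token returned by aws_eks_cluster_auth is short-lived (it is a presigned STS token that expires after roughly 15 minutes), so it can run out during long applies. A variant worth considering in that case (only a sketch, assuming the AWS CLI is installed wherever Terraform runs; this is not taken from the repository's examples) is to let the kubernetes provider fetch a fresh token through its exec block instead:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  # Obtain a short-lived token at plan/apply time instead of caching one
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks-ssp.eks_cluster_id]
  }
}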
If helm is also needed, I think the following block would have to be added as well [2]:
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
}
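The same exec block sketched above can also be nested inside this kubernetes {} block in place of the token argument, should the token-expiry caveat apply to the helm provider as well.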
The provider argument references for kubernetes and helm are available in [3] and [4], respectively.
[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference
[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference