InvalidParameterException: Addon version specified is not supported

TFa*_*aws 2 amazon-web-services kubernetes terraform terraform-provider-aws amazon-eks

I've been trying to deploy an EKS cluster with self-managed nodes for a while now, with no luck. The error I'm currently stuck on comes from the EKS add-ons:

Error: error creating EKS Add-On (DevOpsLabs2b-dev-test--eks:kube-proxy): InvalidParameterException: Addon version specified is not supported { AddonName: "kube-proxy", ClusterName: "DevOpsLabs2b-dev-test--eks", Message_: "Addon version specified is not supported" }

  with module.eks-ssp-kubernetes-addons.module.aws_kube_proxy[0].aws_eks_addon.kube_proxy,
  on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/aws-kube-proxy/main.tf line 19, in resource "aws_eks_addon" "kube_proxy":

The same error is repeated for coredns, but ebs_csi_driver throws:

Error: unexpected EKS Add-On (DevOpsLabs2b-dev-test--eks:aws-ebs-csi-driver) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)

[WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging the previous add-on configuration

My main.tf looks like this:

terraform {

  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"
  assume_role {
    role_arn = "xxx"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}

My eks.tf looks like this:

module "eks-ssp" {
    source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

    # EKS CLUSTER
    tenant            = "DevOpsLabs2b"
    environment       = "dev-test"
    zone              = ""
    terraform_version = "Terraform v1.1.4"

    # EKS Cluster VPC and Subnet mandatory config
    vpc_id             = "xxx"
    private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]

    # EKS CONTROL PLANE VARIABLES
    create_eks         = true
    kubernetes_version = "1.19"

  # EKS SELF MANAGED NODE GROUPS
    self_managed_node_groups = {
    self_mg = {
      node_group_name        = "DevOpsLabs2b"
      subnet_ids             = ["xxx","xxx", "xxx", "xxx"]
      create_launch_template = true
      launch_template_os     = "bottlerocket"       # amazonlinux2eks  or bottlerocket or windows
      custom_ami_id          = "xxx"
      public_ip              = true                   # Enable only for public subnets
      pre_userdata           = <<-EOT
            yum install -y amazon-ssm-agent
            systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent
        EOT

      disk_size     = 10
      instance_type = "t2.small"
      desired_size  = 2
      max_size      = 10
      min_size      = 0
      capacity_type = "" # Optional. Use only for Spot capacity, e.g. capacity_type = "spot"

      k8s_labels = {
        Environment = "dev-test"
        Zone        = ""
        WorkerType  = "SELF_MANAGED_ON_DEMAND"
      }

      additional_tags = {
        ExtraTag    = "t2x-on-demand"
        Name        = "t2x-on-demand"
        subnet_type = "public"
      }
      create_worker_security_group = false # Set to true to create a dedicated security group for this Node Group
    },
  }
}

module "eks-ssp-kubernetes-addons" {
    source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

    eks_cluster_id                        = module.eks-ssp.eks_cluster_id

    # EKS Addons
    enable_amazon_eks_vpc_cni             = true
    enable_amazon_eks_coredns             = true
    enable_amazon_eks_kube_proxy          = true
    enable_amazon_eks_aws_ebs_csi_driver  = true

    #K8s Add-ons
    enable_aws_load_balancer_controller   = true
    enable_metrics_server                 = true
    enable_cluster_autoscaler             = true
    enable_aws_for_fluentbit              = true
    enable_argocd                         = true
    enable_ingress_nginx                  = true

    depends_on = [module.eks-ssp.self_managed_node_groups]
}

What exactly am I missing?

Mar*_*o E 8

K8s is sometimes hard to get right. The example on GitHub shows version 1.21 [1]. So if you leave only this:

    enable_amazon_eks_vpc_cni             = true
    enable_amazon_eks_coredns             = true
    enable_amazon_eks_kube_proxy          = true
    enable_amazon_eks_aws_ebs_csi_driver  = true

    #K8s Add-ons
    enable_aws_load_balancer_controller   = true
    enable_metrics_server                 = true
    enable_cluster_autoscaler             = true
    enable_aws_for_fluentbit              = true
    enable_argocd                         = true
    enable_ingress_nginx                  = true
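In other words, assuming you do not actually need 1.19, the simplest fix is to move the cluster itself to 1.21 so the control plane matches the default add-on images the module resolves. A minimal sketch of the change in eks.tf:

```hcl
module "eks-ssp" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

  # ...all other arguments unchanged from the original eks.tf...

  # Match the control-plane version to the add-on defaults shipped by the module
  kubernetes_version = "1.21"
}
```

Note that changing the version of an already-created cluster triggers an in-place upgrade, so this is easiest on a fresh deployment.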

The images downloaded by default are the ones for K8s version 1.21, as can be seen in [2]. If you really need to use K8s version 1.19, you will have to find the Helm charts that correspond to that version. Here is an example of how to configure the required images [3]:

  amazon_eks_coredns_config = {
    addon_name               = "coredns"
    addon_version            = "v1.8.4-eksbuild.1"
    service_account          = "coredns"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    service_account_role_arn = ""
    additional_iam_policies  = []
    tags                     = {}
  }

However, the CoreDNS version here (addon_version = "v1.8.4-eksbuild.1") is the one used with K8s 1.21. To check which version you need for 1.19, go here [4]. TL;DR: the CoreDNS version you need to specify is 1.8.0. So for the CoreDNS add-on (and the other add-ons whose image versions are tied to the cluster version) to work with 1.19, you must have a block like this:

enable_amazon_eks_coredns             = true
# followed by
  amazon_eks_coredns_config = {
    addon_name               = "coredns"
    addon_version            = "v1.8.0-eksbuild.1"
    service_account          = "coredns"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    service_account_role_arn = ""
    additional_iam_policies  = []
    tags                     = {}
  }

For the other EKS add-ons, you can find more information here [5]. If you click the links in the Name column, you will land directly on the AWS EKS documentation listing the add-on image versions supported by the EKS versions AWS currently supports (1.17 - 1.21).
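For example, kube-proxy on a 1.19 cluster can be pinned the same way as CoreDNS above. This is a sketch, not a verified configuration: the variable name follows the module's `amazon_eks_*_config` pattern and the version tag is illustrative, so confirm both against the module source [2] and the AWS docs [5] before applying:

```hcl
enable_amazon_eks_kube_proxy = true

amazon_eks_kube_proxy_config = {
  addon_name               = "kube-proxy"
  addon_version            = "v1.19.6-eksbuild.2" # illustrative tag; look up the exact 1.19 version in the AWS docs
  service_account          = "kube-proxy"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}
```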

Last but not least, a friendly piece of advice: never configure the AWS provider by hard-coding the access key and secret access key in the provider block. Use named profiles [6], or just use the default one. Instead of the block you currently have:

provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"
  assume_role {
    role_arn = "xxx"
  }
}

Switch to:

provider "aws" {
  region   = "yourdefaultregion"
  profile  = "yourprofilename"
}
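If you still need the role assumption from your original block, it combines cleanly with a named profile; the region, profile name, and role ARN below are placeholders for your own values:

```hcl
provider "aws" {
  region  = "eu-west-1"  # placeholder region
  profile = "devopslabs" # named profile defined in ~/.aws/credentials
  assume_role {
    role_arn = "arn:aws:iam::111122223333:role/terraform-deployer" # placeholder ARN
  }
}
```

This keeps the static credentials out of version control while preserving the same effective identity.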

[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L62

[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/modules/kubernetes-addons/aws-kube-proxy/local.tf#L5

[3] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L148-L157

[4] https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html

[5] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/docs/add-ons/management-add-ons.md

[6] https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html