Switching terraform 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed

Dmi*_*nov 12 terraform terraform-provider-aws amazon-eks

I manage state remotely in terraform-cloud.

I have downloaded and installed the latest terraform 0.13 CLI.

Then I removed the .terraform directory.

Then I ran terraform init with no errors.

Then I ran

terraform apply -var-file env.auto.tfvars

Error: Provider configuration not present

To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.

Releasing state lock. This may take a few moments...

Here is the content of modules/kubernetes/main.tf:

###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################

module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}


#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to the k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 policy to workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

小智 17

All credit for this fix goes to the person who mentioned it in the cloudposse slack channel:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null

This took care of this error for me and moved me on to the next one. All of this was part of upgrading the terraform version.
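
If it isn't obvious which legacy provider addresses are still recorded, the standard terraform providers command lists every provider required by the configuration and by the state, so the remaining -/<name> entries are easy to spot. A minimal sketch, assuming -/null is the only legacy entry left:

# List providers referenced by the configuration and the state;
# legacy 0.12-style entries appear under the "-/<name>" address form.
terraform providers

# Map each legacy address to its namespaced equivalent. Without
# -auto-approve, Terraform shows the affected resources and asks for
# confirmation before rewriting the state.
terraform state replace-provider -- -/null registry.terraform.io/hashicorp/null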


sam*_*ler 13

For us, we updated all of the provider URLs we were using in the code, like so:

terraform state replace-provider 'registry.terraform.io/-/null' 'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' 'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' 'registry.terraform.io/hashicorp/aws'

I wanted to be very specific with the replacements, so I used the broken URL when replacing it with the new one.

To be more specific, this only applies to terraform 0.13:

https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
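
On 0.13 it also helps to declare each provider's source in the configuration so that new plans record the namespaced addresses from the start. A minimal sketch of a required_providers block matching the three providers replaced above (the version constraints are illustrative placeholders, not values from the original post):

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    null = {
      source = "hashicorp/null"
    }
    archive = {
      source = "hashicorp/archive"
    }
  }
}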


小智 0

This error arises when an object in the latest Terraform state is no longer in the configuration, but Terraform can't destroy it (as would normally be expected) because the provider configuration needed to do so isn't present either.


Solution:


This can only arise if you've recently removed the "data.null_data_source" object along with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply to have Terraform destroy the "data.null_data_source" object, and then you can remove the provider "null" block again since it's no longer needed.
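
A minimal sketch of that temporary restore, assuming the provider needs no custom configuration:

# Temporarily restored so Terraform can remove the orphaned
# data.null_data_source entry; delete this block again after apply.
provider "null" {}

After terraform apply completes and the orphaned data source is gone from the state, the block can be deleted again.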
