
Switching terraform 0.12.6 to 0.13.0 gives provider["registry.terraform.io/-/null"] is required, but it has been removed

How do I fix "switching terraform 0.12.6 to 0.13.0 gives provider["registry.terraform.io/-/null"] is required, but it has been removed"?

I manage the state remotely in terraform-cloud.

I have downloaded and installed the latest terraform 0.13 CLI.

Then I removed the .terraform directory.

Then I ran terraform init, which completed without errors.
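In shell terms, the re-initialization steps above look roughly like this (a minimal sketch; it assumes you run it from the root module that uses the terraform-cloud backend):

# remove the provider/module cache left over from the 0.12 runs
rm -rf .terraform

# re-initialize with the 0.13 CLI against the remote backend
terraform init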

Then I ran:

terraform apply -var-file env.auto.tfvars

Error: Provider configuration not present

To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0], after which you can remove the provider configuration again.

Releasing state lock. This may take a few moments...

Here is the content of modules/kubernetes/main.tf:

###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################

module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}


#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# Todo:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 Policy to Workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

Solution

All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null

This fixed this error for me and moved me on to the next one; it is all part of upgrading the Terraform version.
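Before and after the replacement it can help to confirm which provider addresses the configuration and the state actually reference; the terraform providers command prints both lists, so any remaining legacy registry.terraform.io/-/... entries show up there. A quick check, run from the same working directory:

terraform providers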

Another answer:

In our case, we updated all of the provider URLs used by our code with the following commands:

terraform state replace-provider 'registry.terraform.io/-/null' 'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' 'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' 'registry.terraform.io/hashicorp/aws'

I wanted to be very specific about the replacements, so I used the broken URL as the source when replacing it with the new one.

To be more specific, this only applies to terraform 0.13 [1]; a sketch of the explicit provider source addresses that 0.13 expects follows the references below.

[1] https://www.terraform.io/upgrade-guides/0-13.html#explicit-provider-source-locations

[2] https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
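Reference [1] describes the explicit provider source locations that Terraform 0.13 introduced. In the configuration this takes the shape of a required_providers block like the one below; the provider set and version constraints here are illustrative assumptions, not taken from the question's code:

terraform {
  required_providers {
    # 0.13-style source addresses in the hashicorp/ namespace
    null = {
      source  = "hashicorp/null"
      version = "~> 2.1"   # assumption: any recent 2.x release
    }
    aws = {
      source = "hashicorp/aws"
    }
  }
}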

Another answer:

This error arises when an object in the latest Terraform state is no longer present in the configuration, but Terraform can't destroy it (as it would normally expect to do) because the provider configuration needed to do so isn't present either.

Solution:

This situation should only arise if you have recently removed the "data.null_data_source" object along with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply so that Terraform can destroy the data "null_data_source" object, and then you can remove the provider "null" block again, since it is no longer needed.
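A minimal sketch of that temporary restore (the version constraint and placement are assumptions; delete the block again once the apply has removed the stale object from the state):

# temporary, empty provider configuration so Terraform can destroy the
# stale data.null_data_source object still recorded in the state
provider "null" {}

# under 0.13 the null provider also needs a source entry, e.g.
# null = { source = "hashicorp/null", version = "~> 2.1" }
# inside the terraform { required_providers { ... } } block

Then run terraform apply (for example terraform apply -var-file env.auto.tfvars, as above) and remove the temporary block afterwards.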
