How do I pass values from Terraform to a Helm chart values.yaml file?

Bob*_*786 · 6 · amazon-web-services, terraform, kubernetes-helm, terraform-provider-aws, terraform-provider-helm

I'm creating an ingress-nginx controller with a Helm chart from Terraform. I have a values.yaml file where I add my custom settings, but I need to pass the SSL certificate value from a Terraform resource into it. How do I do that? I'm using the code below, but I'm getting errors.

resource "aws_acm_certificate" "ui_cert" {\n  domain_name       = var.DOMAIN_NAME\n  validation_method = "DNS"\n\n  tags = {\n    Environment = var.ENVIRONMENT \n  }\n\n  lifecycle {\n    create_before_destroy = true\n  }\n\n}\n\n
resource "helm_release" "nginix_ingress" {\n\n  depends_on = [module.eks, kubernetes_namespace.nginix_ingress,aws_acm_certificate.ui_cert]\n\n  name       = "ingress-nginx"\n  repository = "https://kubernetes.github.io/ingress-nginx"\n  chart      = "ingress-nginx"\n  namespace  = var.NGINX_INGRESS_NAMESPACE\n   \n  values = [templatefile("values.yaml", {\n    controller.service.beta.kubernetes.io/aws-load-balancer-internal = aws_acm_certificate.ui_cert.name,\n  })]\n}\n

I'm getting the following errors:

╷
│ Error: Reference to undeclared resource
│
│   on ui.tf line 27, in resource "helm_release" "nginix_ingress":
│   27:     controller.service.beta.kubernetes.io/aws-load-balancer-internal = aws_acm_certificate.ui_cert.name,
│
│ A managed resource "controller" "service" has not been declared in the root
│ module.
╵
╷
│ Error: Invalid reference
│
│   on ui.tf line 27, in resource "helm_release" "nginix_ingress":
│   27:     controller.service.beta.kubernetes.io/aws-load-balancer-internal = aws_acm_certificate.ui_cert.name,
│
│ A reference to a resource type must be followed by at least one attribute
│ access, specifying the resource name.
╵
╷
│ Error: Unsupported attribute
│
│   on ui.tf line 27, in resource "helm_release" "nginix_ingress":
│   27:     controller.service.beta.kubernetes.io/aws-load-balancer-internal = aws_acm_certificate.ui_cert.name,
│
│ This object has no argument, nested block, or exported attribute named
│ "name".
╵
This is my values.yaml file:
controller:
  name: controller
  image:
    chroot: false
    registry: registry.k8s.io
    image: ingress-nginx/controller
    tag: "v1.3.0"
    digest: sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5
    digestChroot: sha256:0fcb91216a22aae43b374fc2e6a03b8afe9e8c78cbf07a09d75636dc4ea3c191
    pullPolicy: IfNotPresent
    runAsUser: 101
    allowPrivilegeEscalation: true
  containerName: controller
  containerPort:
    # http: 80
    https: 443
  config:
    use-proxy-protocol: "true"
  # -- Optionally customize the pod hostname.
  hostname: {}

  # -- Process IngressClass per name (additionally as per spec.controller).
  ingressClassByName: false

  # -- This configuration defines if Ingress Controller should allow users to set
  # their own *-snippet annotations, otherwise this is forbidden / dropped
  # when users add those annotations.
  # Global snippets in ConfigMap are still respected
  allowSnippetAnnotations: true

  ## This section refers to the creation of the IngressClass resource
  ## IngressClass resources are supported since k8s >= 1.18 and required since k8s >= 1.19
  ingressClassResource:
    # -- Name of the ingressClass
    name: nginx
    # -- Is this ingressClass enabled or not
    enabled: true
    # -- Is this the default ingressClass for the cluster
    default: false
    # -- Controller-value of the controller that is processing this ingressClass
    controllerValue: "k8s.io/ingress-nginx"

    # -- Parameters is a link to a custom resource containing additional
    # configuration for the controller. This is optional if the controller
    # does not require extra parameters.
    parameters: {}

  # -- For backwards compatibility with ingress.class annotation, use ingressClass.
  # Algorithm is as follows, first ingressClassName is considered, if not present, controller looks for ingress.class annotation
  ingressClass: nginx

  # -- Labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  # -- Security Context policies for controller pods

  # -- Allows customization of the source of the IP address or FQDN to report
  # in the ingress status field. By default, it reads the information provided
  # by the service. If disable, the status field reports the IP address of the
  # node or nodes where an ingress controller pod is running.
  publishService:
    # -- Enable 'publishService' or not
    enabled: true
    # -- Allows overriding of the publish service to bind to
    # Must be <namespace>/<service_name>
    pathOverride: ""

  tcp:
    # -- Allows customization of the tcp-services-configmap; defaults to $(POD_NAMESPACE)
    configMapNamespace: ""
    # -- Annotations to be added to the tcp config configmap
    annotations: {}

  udp:
    # -- Allows customization of the udp-services-configmap; defaults to $(POD_NAMESPACE)
    configMapNamespace: ""
    # -- Annotations to be added to the udp config configmap
    annotations: {}

  # -- Use a `DaemonSet` or `Deployment`
  kind: Deployment

  # -- Annotations to be added to the controller Deployment or DaemonSet
  ##
  annotations: {}
  #  keel.sh/pollSchedule: "@every 60m"

  # -- Labels to be added to the controller Deployment or DaemonSet and other resources that do not have option to specify labels
  ##
  labels: {}
  #  keel.sh/policy: patch
  #  keel.sh/trigger: poll


  # -- The update strategy to apply to the Deployment or DaemonSet
  ##
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

  # -- `minReadySeconds` to avoid killing pods before we are ready
  ##
  minReadySeconds: 0



  # -- Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  ##
  topologySpreadConstraints: []
    # - maxSkew: 1
    #   topologyKey: topology.kubernetes.io/zone
    #   whenUnsatisfiable: DoNotSchedule
    #   labelSelector:
    #     matchLabels:
    #       app.kubernetes.io/instance: ingress-nginx-internal

  # -- `terminationGracePeriodSeconds` to avoid killing pods before we are ready
  ## wait up to five minutes for the drain of connections
  ##
  terminationGracePeriodSeconds: 300

  # -- Node labels for controller pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    kubernetes.io/os: linux

  ## Liveness and readiness probe values
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  ## startupProbe:
  ##   httpGet:
  ##     # should match container.healthCheckPath
  ##     path: "/healthz"
  ##     port: 10254
  ##     scheme: HTTP
  ##   initialDelaySeconds: 5
  ##   periodSeconds: 5
  ##   timeoutSeconds: 2
  ##   successThreshold: 1
  ##   failureThreshold: 5
  livenessProbe:
    httpGet:
      # should match container.healthCheckPath
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    httpGet:
      # should match container.healthCheckPath
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3


  # -- Path of the health check endpoint. All requests received on the port defined by
  # the healthz-port parameter are forwarded internally to this path.
  healthCheckPath: "/healthz"

  # -- Address to bind the health check endpoint.
  # It is better to set this option to the internal node address
  # if the ingress nginx controller is running in the `hostNetwork: true` mode.
  healthCheckHost: ""

  # -- Annotations to be added to controller pods
  ##
  podAnnotations: {}

  replicaCount: 1

  minAvailable: 1

  ## Define requests resources to avoid probe issues due to CPU utilization in busy nodes
  ## ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
  ## Ideally, there should be no limits.
  ## https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
  resources:
  ##  limits:
  ##    cpu: 100m
  ##    memory: 90Mi
    requests:
      cpu: 100m
      memory: 90Mi

  # Mutually exclusive with keda autoscaling
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
    behavior: {}
      # scaleDown:
      #   stabilizationWindowSeconds: 300
      #  policies:
      #   - type: Pods
      #     value: 1
      #     periodSeconds: 180
      # scaleUp:
      #   stabilizationWindowSeconds: 300
      #   policies:
      #   - type: Pods
      #     value: 2
      #     periodSeconds: 60

  autoscalingTemplate: []
  # Custom or additional autoscaling metrics
  # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
  # - type: Pods
  #   pods:
  #     metric:
  #       name: nginx_ingress_controller_nginx_process_requests_total
  #     target:
  #       type: AverageValue
  #       averageValue: 10000m

  # Mutually exclusive with hpa autoscaling
  keda:
    apiVersion: "keda.sh/v1alpha1"
    ## apiVersion changes with keda 1.x vs 2.x
    ## 2.x = keda.sh/v1alpha1
    ## 1.x = keda.k8s.io/v1alpha1
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    pollingInterval: 30
    cooldownPeriod: 300
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
      # Custom annotations for ScaledObject resource
      #  annotations:
      # key: value
    triggers: []
 #     - type: prometheus
 #       metadata:
 #         serverAddress: http://<prometheus-host>:9090
 #         metricName: http_requests_total
 #         threshold: '100'
 #         query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))

    behavior: {}
 #     scaleDown:
 #       stabilizationWindowSeconds: 300
 #       policies:
 #       - type: Pods
 #         value: 1
 #         periodSeconds: 180
 #     scaleUp:
 #       stabilizationWindowSeconds: 300
 #       policies:
 #       - type: Pods
 #         value: 2
 #         periodSeconds: 60

  # -- Enable mimalloc as a drop-in replacement for malloc.
  ## ref: https://github.com/microsoft/mimalloc
  ##
  enableMimalloc: true

  ## Override NGINX template
  customTemplate:
    configMapName: ""
    configMapKey: ""

  service:
    enabled: true
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "ssl-cert"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"

Can someone tell me how to get this working?


Mar*_*o E · 11

The built-in templatefile function is used to pass values to variables defined in a template file. In your case, for example, you would define a variable in the template file, say ssl_cert. Then, when calling the templatefile function, you pass it the value provided by the ACM resource:

  values = [templatefile("values.yaml", {
    ssl_cert = aws_acm_certificate.ui_cert.name
  })]

The ssl_cert variable passed to templatefile would then be tied to the controller.service annotation service.beta.kubernetes.io/aws-load-balancer-ssl-cert. Based on your YAML file, the variable placeholder should be added in the last part of the file:

  service:
    enabled: true
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "${ssl_cert}"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"

The templatefile function is really powerful, but I strongly suggest understanding how variables and variable substitution work before using it [1]. Unless you have a placeholder variable both in the function call and in the template file, it will not know which substitutions you want it to make.
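Putting both sides together, here is a minimal sketch of how the two pieces line up. Two details in it go beyond the snippets above and are assumptions on my part: the placeholder in values.yaml uses templatefile's ${...} interpolation syntax, and the certificate is referenced as aws_acm_certificate.ui_cert.arn rather than .name, since the aws_acm_certificate resource does not export a name attribute (that is what the "Unsupported attribute" error is about) and the aws-load-balancer-ssl-cert annotation expects a certificate ARN.

  # values.yaml -- the template side; ${ssl_cert} is replaced by templatefile:
  #
  #   controller:
  #     service:
  #       annotations:
  #         service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "${ssl_cert}"

  resource "helm_release" "nginix_ingress" {
    depends_on = [module.eks, kubernetes_namespace.nginix_ingress, aws_acm_certificate.ui_cert]

    name       = "ingress-nginx"
    repository = "https://kubernetes.github.io/ingress-nginx"
    chart      = "ingress-nginx"
    namespace  = var.NGINX_INGRESS_NAMESPACE

    # The keys of this map must be plain identifiers; they become the variables
    # available inside the template, not Helm value paths.
    values = [templatefile("values.yaml", {
      ssl_cert = aws_acm_certificate.ui_cert.arn # assumption: the ARN, not .name
    })]
  }

With that in place, terraform plan should render the template without the "Reference to undeclared resource" errors, because templatefile only sees the simple ssl_cert variable instead of the dotted annotation key.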


[1] https://www.terraform.io/language/functions/templatefile