Set a node label as a Pod environment variable

Jon*_*nas 6 kubernetes

How can I set a Node label as a Pod environment variable? I need to know the value of the topology.kubernetes.io/zone label inside the Pod.

ane*_*yte 4

The Downward API does not currently support exposing node labels to Pods/containers. There is an open issue about this on GitHub, but it is unclear when, if ever, it will be implemented.
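For comparison, here is what the Downward API does support: node-level fields such as spec.nodeName can be injected as environment variables (the example below relies on exactly this to discover which node the Pod landed on), but there is no fieldPath that reaches node labels:

```yaml
# Supported: inject the node's name via the Downward API
env:
  - name: NODENAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
# Not supported: there is no fieldPath for the *node's* labels,
# so topology.kubernetes.io/zone cannot be injected this way
```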

That leaves fetching the node labels from the Kubernetes API, the same way kubectl does. This is not trivial to implement, especially if you want the labels as environment variables. I'll give you an example of how to do it with an initContainer, curl, and jq. If possible, I recommend implementing this in your application instead, as it will be easier and cleaner.
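For reference, this is roughly how you would read the same label with kubectl (the node name my-node is a placeholder; note that dots inside the label key must be escaped in jsonpath):

```shell
# "my-node" is a placeholder; substitute a real node name
kubectl get node my-node \
  -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}'
```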

To request the labels, you need permission to do so, so the example below creates a service account with permission to get (describe) nodes. The script in the initContainer then uses that service account to make the request and extracts the labels from the JSON response. The test container sources the environment variable from the shared file and echoes one of them.
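The extraction step can be tried locally without a cluster. The sketch below reproduces what the initContainer writes to /node/zone, using /tmp instead and a made-up labels JSON (the zone value us-east-1a is an assumption for illustration):

```shell
# Fake node labels JSON, standing in for the API server's response
# (the "us-east-1a" zone value is made up for this example)
cat > /tmp/labels.json <<'EOF'
{
  "kubernetes.io/hostname": "node-1",
  "topology.kubernetes.io/zone": "us-east-1a"
}
EOF

# Same jq extraction as in the initContainer script;
# the quoted key is needed because of the dots and slash in it
NODE_ZONE=$(jq -r '."topology.kubernetes.io/zone"' /tmp/labels.json)

# Save it in a format suitable for sourcing
echo "export NODE_ZONE=${NODE_ZONE}" > /tmp/zone
cat /tmp/zone
```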

Example:

# Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: describe-nodes
  namespace: <insert-namespace-name-where-the-app-is>
---
# Create a cluster role that is allowed to perform describe ("get") on ["nodes"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: describe-nodes
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: describe-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: describe-nodes
subjects:
- kind: ServiceAccount
  name: describe-nodes
  namespace: <insert-namespace-name-where-the-app-is>
---
# Proof of concept pod
apiVersion: v1
kind: Pod
metadata:
  name: get-node-labels
spec:
  # Service account to get node labels from Kubernetes API
  serviceAccountName: describe-nodes

  # A volume to keep the extracted labels
  volumes:
    - name: node-info
      emptyDir: {}

  initContainers:
    # The container that extracts the labels
    - name: get-node-labels

      # The image needs the 'curl' and 'jq' tools in it.
      # I used the curl image and ran it as root to install 'jq'
      # at runtime.
      # THIS IS A BAD PRACTICE, UNSUITABLE FOR PRODUCTION.
      # Build an image where both are present.
      image: curlimages/curl
      # Remove securityContext if you have an image with both curl and jq
      securityContext:
        runAsUser: 0

      # It'll put labels here
      volumeMounts:
        - mountPath: /node
          name: node-info

      env:
        # pass node name to the environment
        - name: NODENAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: APISERVER
          value: https://kubernetes.default.svc
        - name: SERVICEACCOUNT
          value: /var/run/secrets/kubernetes.io/serviceaccount
        - name: SCRIPT
          value: |
            set -eo pipefail

            # install jq; you don't need this line if the image has it
            apk add jq

            TOKEN=$(cat ${SERVICEACCOUNT}/token)
            CACERT=${SERVICEACCOUNT}/ca.crt

            # Get node labels into a json
            curl --cacert ${CACERT} \
                 --header "Authorization: Bearer ${TOKEN}" \
                 -X GET ${APISERVER}/api/v1/nodes/${NODENAME} | jq .metadata.labels > /node/labels.json

            # Extract 'topology.kubernetes.io/zone' from json
            NODE_ZONE=$(jq '."topology.kubernetes.io/zone"' -r /node/labels.json)
            # and save it into a file in the format suitable for sourcing
            echo "export NODE_ZONE=${NODE_ZONE}" > /node/zone
      command: ["/bin/ash", "-c"]
      args:
        - 'echo "$$SCRIPT" > /tmp/script && ash /tmp/script'

  containers:
    # A container that needs the label value
    - name: test
      image: debian:buster
      command: ["/bin/bash", "-c"]
      # source ENV variable from file, echo NODE_ZONE, and keep running doing nothing
      args: ["source /node/zone && echo $$NODE_ZONE && cat /dev/stdout"]
      volumeMounts:
        - mountPath: /node
          name: node-info
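To try the proof of concept, apply the manifests and check the test container's log; the file name node-labels.yaml is arbitrary, and the zone value printed depends on your cluster:

```shell
kubectl apply -f node-labels.yaml   # the manifests above, saved to a file
kubectl logs get-node-labels -c test
```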