Ama*_*eep (score 10) · tags: kubernetes, google-kubernetes-engine
I run a daily job on GKE with the help of Kubernetes. Every day, based on a cron configured in Kubernetes, a new container spins up and tries to insert some data into BigQuery.
Our setup is that we have two different projects in GCP: in one project we maintain the data in BigQuery, and in the other project all of our GKE is running. So when GKE has to interact with resources in a different project, my guess is that I have to set an environment variable named GOOGLE_APPLICATION_CREDENTIALS pointing to a service account .json file. But since Kubernetes is spinning up a new container every day, I don't know how and where I should set this variable.
Thanks in advance!
---
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: "base64JsonServiceAccount"
---
apiVersion: v1
kind: Pod
metadata:
  name: adtech-ads-apidata-el-adunit-pod
spec:
  containers:
    - name: adtech-ads-apidata-el-adunit-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: "/etc/gcp"
          readOnly: true
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
And this is our CronJob spec:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: adtech-ads-apidata-el-adunit
spec:
  schedule: "*/5 * * * *"
  suspend: false
  concurrencyPolicy: Replace
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: adtech-ads-apidata-el-adunit-container
              image: {{.image}}
              args:
                - -cp
                - opt/nyt/DFPDataIngestion-1.0-jar-with-dependencies.jar
                - com.nyt.cron.AdUnitJob
              env:
                - name: ENV_APP_NAME
                  value: "{{.env_app_name}}"
                - name: ENV_APP_CONTEXT_NAME
                  value: "{{.env_app_context_name}}"
                - name: ENV_GOOGLE_PROJECTID
                  value: "{{.env_google_projectId}}"
                - name: ENV_GOOGLE_DATASETID
                  value: "{{.env_google_datasetId}}"
                - name: ENV_REPORTING_DATASETID
                  value: "{{.env_reporting_datasetId}}"
                - name: ENV_ADBRIDGE_DATASETID
                  value: "{{.env_adbridge_datasetId}}"
                - name: ENV_SALESFORCE_DATASETID
                  value: "{{.env_salesforce_datasetId}}"
                - name: ENV_CLOUD_PLATFORM_URL
                  value: "{{.env_cloud_platform_url}}"
                - name: ENV_SMTP_HOST
                  value: "{{.env_smtp_host}}"
                - name: ENV_TO_EMAIL
                  value: "{{.env_to_email}}"
                - name: ENV_FROM_EMAIL
                  value: "{{.env_from_email}}"
                - name: ENV_AWS_USERNAME
                  value: "{{.env_aws_username}}"
                - name: ENV_CLIENT_ID
                  value: "{{.env_client_id}}"
                - name: ENV_REFRESH_TOKEN
                  value: "{{.env_refresh_token}}"
                - name: ENV_NETWORK_CODE
                  value: "{{.env_network_code}}"
                - name: ENV_APPLICATION_NAME
                  value: "{{.env_application_name}}"
                - name: ENV_SALESFORCE_USERNAME
                  value: "{{.env_salesforce_username}}"
                - name: ENV_SALESFORCE_URL
                  value: "{{.env_salesforce_url}}"
                - name: GOOGLE_APPLICATION_CREDENTIALS
                  value: "/etc/gcp/sa_credentials.json"
                - name: ENV_CLOUD_SQL_URL
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: cloud_sql_url
                - name: ENV_AWS_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: aws_password
                - name: ENV_CLIENT_SECRET
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: dfp_client_secret
                - name: ENV_SALESFORCE_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: secrets
                      key: salesforce_password
          restartPolicy: OnFailure
小智 (20 upvotes):
So if your GKE project is project my-gke, and the project containing the services/things your GKE containers need to access is project my-data, one approach is:
1. Create a service account in the my-data project. Give it whatever GCP roles/permissions it needs (e.g., roles/bigquery.dataViewer if there are BigQuery tables your my-gke GKE containers need to read).
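As a sketch, this step could look like the following with the gcloud CLI. The service-account name my-gke-bq-reader is a placeholder of my own, not something from the question; the project IDs are the my-gke/my-data examples from above:

```shell
# Assumption: gcloud is authenticated with IAM admin rights on my-data.
# Create the service account in the my-data project:
gcloud iam service-accounts create my-gke-bq-reader \
    --project my-data \
    --display-name "BigQuery reader for my-gke workloads"

# Grant it the role it needs on the my-data project:
gcloud projects add-iam-policy-binding my-data \
    --member "serviceAccount:my-gke-bq-reader@my-data.iam.gserviceaccount.com" \
    --role "roles/bigquery.dataViewer"

# Create a key and download the accompanying .json credentials file:
gcloud iam service-accounts keys create the-downloaded-SA-credentials.json \
    --iam-account my-gke-bq-reader@my-data.iam.gserviceaccount.com
```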
2. Create a key for that service account and download the .json file containing the SA credentials. 3. Create a Kubernetes Secret resource for those service-account credentials. It might look something like this:
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: <contents of running 'base64 the-downloaded-SA-credentials.json'>
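One way to produce that sa_json value is to substitute the base64 of the key file directly into the manifest; a dummy key file stands in for the real downloaded credentials in this sketch:

```shell
# Stand-in for the downloaded service-account key file:
echo '{"type": "service_account", "project_id": "my-data"}' > sa.json

# Build the Secret manifest with the base64-encoded file contents:
cat > secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: $(base64 < sa.json | tr -d '\n')
EOF

cat secret.yaml

# Equivalent, letting kubectl do the base64 encoding for you:
#   kubectl create secret generic my-data-service-account-credentials \
#       --from-file=sa_json=sa.json
```

Either route produces the same Secret; `kubectl create secret` just saves you the manual encoding step.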
Mount the credentials into the containers that need access:
[...]
spec:
  containers:
    - name: my-container
      volumeMounts:
        - name: service-account-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
[...]
  volumes:
    - name: service-account-credentials-volume
      secret:
        secretName: my-data-service-account-credentials
        items:
          - key: sa_json
            path: sa_credentials.json
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable in the container to point to the path where the credentials are mounted:
[...]
spec:
  containers:
    - name: my-container
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/sa_credentials.json
This way, any official GCP client (e.g., the GCP Python client, the GCP Java client, the gcloud CLI, etc.) should respect the GOOGLE_APPLICATION_CREDENTIALS env var and, when making API requests, automatically use the credentials .json file of the my-data service account you created and mounted.
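To sanity-check the wiring, you could inspect a running pod; the pod name below is taken from the question's example and may differ in your cluster:

```shell
# Hypothetical checks against a running pod (names are assumptions):
#   kubectl exec adtech-ads-apidata-el-adunit-pod -- \
#       printenv GOOGLE_APPLICATION_CREDENTIALS
#   kubectl exec adtech-ads-apidata-el-adunit-pod -- \
#       ls -l /etc/gcp/sa_credentials.json
# The client libraries simply read this variable when building
# Application Default Credentials, as this local stand-in shows:
export GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp/sa_credentials.json
printenv GOOGLE_APPLICATION_CREDENTIALS
```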
Viewed: 2678 times