jst*_*tim 0 kubernetes amazon-eks
I'm running Jobs on EKS. After I tried to start a Job with invalid YAML, it doesn't seem to let go of the bad YAML and keeps giving me the same error message even after I've corrected the file.
I added an environment variable with a boolean value to the env section, which triggered this error:
Error from server (BadRequest): error when creating "k8s/jobs/create_csv.yaml": Job in version "v1" cannot be handled as a Job: v1.Job: Spec: v1.JobSpec: Template: v1.PodTemplateSpec: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|oduction"},{"name":"RAILS_LOG_TO_STDOUT","value":true},{"name":"AWS_REGION","value":"us-east-1"},{"n|...
I changed the value to the string yes, but the error message keeps showing the original, bad YAML. kubectl get jobs --all-namespaces shows no jobs.
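For context, bare yes is one of the YAML 1.1 boolean literals, so the parser hands the API server true while Kubernetes expects every env value to be a string. A minimal sketch of the quoted form of that entry, assuming quoting is the intended fix (not necessarily the exact edit that was made):

env:
  - name: RAILS_LOG_TO_STDOUT
    value: "yes"   # quoted so it stays a string instead of the boolean true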
I thought it might be because I didn't have imagePullPolicy set to Always, but it happens even when I run the kubectl command locally.
Here is my job definition file:
apiVersion: batch/v1
kind: Job
metadata:
  generateName: create-csv-
  labels:
    transformer: AR
spec:
  template:
    spec:
      containers:
        - name: create-csv
          image: my-image:latest
          imagePullPolicy: Always
          command: ["bin/rails", "create_csv"]
          env:
            - name: RAILS_ENV
              value: production
            - name: RAILS_LOG_TO_STDOUT
              value: yes
            - name: AWS_REGION
              value: us-east-1
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws
                  key: aws_access_key_id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws
                  key: aws_secret_access_key
      restartPolicy: OnFailure
  backoffLimit: 6
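As an aside (a suggested check, not part of the original post): on recent kubectl versions, a client-side dry run prints exactly what would be submitted from the file on disk, which can help confirm the corrected YAML is really the one being sent:

kubectl create --dry-run=client -o yaml -f k8s/jobs/create_csv.yaml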