How do I delete completed Kubernetes pods?

Pay*_*ian 2 bash kubernetes

Answering my own question

I have a bunch of pods in Kubernetes which have finished (either successfully or with an error), and I'd like to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:

NAME                                           READY   STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1     ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1     Running            0          23h
redis-scheduler-dev-master-0                   1/1     Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1     Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1     Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1     Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2     Running            0          36m
snapshot-169-5af87b54                          0/1     Completed          0          20m
snapshot-169-8705f77c                          0/1     Completed          0          1h
snapshot-169-be6f4774                          0/1     Completed          0          1h
snapshot-169-ce9a8946                          0/1     Completed          0          1h
snapshot-169-d3099b06                          0/1     ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1     Completed          0          21m
snapshot-204-7c86df5a                          0/1     Completed          0          1h
snapshot-204-87f35e36                          0/1     ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1     Completed          0          1h
snapshot-204-c3d90db6                          0/1     Completed          0          1h
snapshot-245-3c9a7226                          0/1     ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1     Completed          0          21m
snapshot-245-71911b06                          0/1     Completed          0          1h
snapshot-245-a8f5dd5e                          0/1     Completed          0          1h
snapshot-245-b9132236                          0/1     Completed          0          1h
snapshot-76-1e515338                           0/1     Completed          0          22m
snapshot-76-4a7d9a30                           0/1     Completed          0          1h
snapshot-76-9e168c9e                           0/1     Completed          0          1h
snapshot-76-ae510372                           0/1     Completed          0          1h
snapshot-76-f166eb18                           0/1     ImagePullBackOff   0          30m
train-169-65f88cec                             0/1     Error              0          20m
train-169-9c92f72a                             0/1     Error              0          1h
train-169-c935fc84                             0/1     Error              0          1h
train-169-d9593f80                             0/1     Error              0          1h
train-204-70729e42                             0/1     Error              0          20m
train-204-9203be3e                             0/1     Error              0          1h
train-204-d3f2337c                             0/1     Error              0          1h
train-204-e41a3e88                             0/1     Error              0          1h
train-245-7b65d1f2                             0/1     Error              0          19m
train-245-a7510d5a                             0/1     Error              0          1h
train-245-debf763e                             0/1     Error              0          1h
train-245-eec1908e                             0/1     Error              0          1h
train-76-86381784                              0/1     Completed          0          19m
train-76-b1fdc202                              0/1     Error              0          1h
train-76-e972af06                              0/1     Error              0          1h
train-76-f993c8d8                              0/1     Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2     Running            0          36m
worker-6997bf76bd-kvjx4                        2/2     Running            0          25m
worker-6997bf76bd-prxbg                        2/2     Running            0          36m

And I want to get rid of pods like train-204-d3f2337c. How can I do that?

Ars*_*kov 20

If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit to control how many finished jobs (and their pods) are kept around.

Example:

apiVersion: batch/v1  # GA since Kubernetes 1.21; older clusters may still need batch/v1beta1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/10 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
         ...
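If the CronJob already exists, you don't need to re-apply the whole manifest. As a sketch (assuming the CronJob is named my-cron-job, as in the example above), you can patch the two limits in place:

# Set the history limits on an existing CronJob
kubectl patch cronjob my-cron-job --type=merge \
  -p '{"spec":{"failedJobsHistoryLimit":1,"successfulJobsHistoryLimit":3}}'

Kubernetes then garbage-collects the older finished jobs, and deleting a job cascades to the pods it created.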

  • Each job creates pods - I guess that's what he's talking about (relevant to my case, thanks) (5 upvotes)
  • He's talking about the pod list, not jobs. (2 upvotes)

Jav*_*nas 12

If you want to delete the pods that are not running, you can first list them with a single command:

kubectl get pods --field-selector=status.phase!=Running

Updated with the command to actually delete those pods:

kubectl delete pods --field-selector=status.phase!=Running

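One caveat: status.phase!=Running also matches pods that are still Pending (or Unknown). Field selectors can be chained with commas, which are ANDed together, so a more conservative sketch that leaves Pending pods alone would be:

# Delete only pods that are neither Running nor Pending
kubectl delete pods --field-selector=status.phase!=Running,status.phase!=Pending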


pji*_*ncz 9

Nowadays this is a bit easier.

You can list all completed pods with:

kubectl get pod --field-selector=status.phase==Succeeded

And delete all completed pods with:

kubectl delete pod --field-selector=status.phase==Succeeded
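On reasonably recent kubectl versions you can also add --all-namespaces (-A) to sweep the whole cluster instead of just the current namespace, for example:

# Delete Succeeded pods across all namespaces
kubectl delete pod --field-selector=status.phase==Succeeded --all-namespaces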

  • Is there an equivalent way to do this with the Kubernetes Python API? (2 upvotes)

Luk*_*ski 7

You can do this in two ways.

$ kubectl delete pod $(kubectl get pods | grep Completed | awk '{print $1}')

Or

$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod

Both solutions will get the job done.
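One caveat with the grep approach: it matches anywhere in the line, so a pod whose name happens to contain "Completed" would be deleted too. A slightly safer sketch matches only the STATUS column and skips the delete when nothing matches (-r is GNU xargs):

# $3 is the STATUS column once headers are suppressed
kubectl get pods --no-headers | awk '$3 == "Completed" {print $1}' | xargs -r kubectl delete pod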

  • Just add "-n yournamespace" to both "kubectl" commands. (2 upvotes)

Tom*_*ert 6

As previous answers mentioned, you can use the following command:

kubectl delete pod --field-selector=status.phase=={{phase}}

to delete pods in a certain phase. What was still missing is a quick summary of which phases exist, so the valid values for a pod's phase are:

Pending, Running, Succeeded, Failed, Unknown

In this specific case, to delete the pods shown as Error (which are in the Failed phase):

kubectl delete pod --field-selector=status.phase==Failed
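And to clear out both terminal phases in one go, a small shell loop works:

# Remove pods in both terminal phases
for phase in Succeeded Failed; do
  kubectl delete pod --field-selector=status.phase==$phase
done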