Cannot delete the underlying VM of a Kubernetes node

Pet*_*o M 5 google-compute-engine kubernetes google-kubernetes-engine

I am running a three-node cluster on GCE. I want to drain one node and delete the underlying VM.

The documentation for the kubectl drain command says:

Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node)

I run the following commands:

  1. Get the nodes

    $ kl get nodes
    NAME                                      STATUS    AGE
    gke-jcluster-default-pool-9cc4e660-6q21   Ready     43m
    gke-jcluster-default-pool-9cc4e660-rx9p   Ready     6m
    gke-jcluster-default-pool-9cc4e660-xr4z   Ready     23h
    
  2. Drain the rx9p node

    $ kl drain gke-jcluster-default-pool-9cc4e660-rx9p --force
    node "gke-jcluster-default-pool-9cc4e660-rx9p" cordoned
    WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: fluentd-cloud-logging-gke-jcluster-default-pool-9cc4e660-rx9p, kube-proxy-gke-jcluster-default-pool-9cc4e660-rx9p
    node "gke-jcluster-default-pool-9cc4e660-rx9p" drained
    
  3. Delete the VM with gcloud.

     $ gcloud compute instances delete gke-jcluster-default-pool-9cc4e660-rx9p
    
  4. List the VMs.

     $ gcloud compute instances list
    

    In the output I still see the rx9p VM that I deleted above. And if I run kubectl get nodes, I also still see the rx9p node.

What is going on here? Is the VM being recreated after I delete it? Do I have to wait for some timeout between the commands?

Jan*_*art 5

You are on the right track by draining the node first.

The nodes (Compute Engine instances) are part of a managed instance group. If you delete one with a plain gcloud compute instances delete command, the managed instance group will simply recreate it to maintain its target size.
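You can confirm this by listing the instances that the group is maintaining (the group name below is inferred from the node names above; ZONE is a placeholder for your cluster's zone):

```shell
# List the instances managed by the node pool's instance group.
# A VM removed with `gcloud compute instances delete` reappears in
# this list once the group has recreated it.
gcloud compute instance-groups managed list-instances \
  gke-jcluster-default-pool-9cc4e660-grp \
  --zone=ZONE
```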

To delete one properly, use this command (after draining it!):

gcloud compute instance-groups managed delete-instances \
  gke-jcluster-default-pool-9cc4e660-grp \
  --instances=gke-jcluster-default-pool-9cc4e660-rx9p \
  --zone=...
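Unlike a bare instance delete, delete-instances also reduces the group's target size, so the VM is not recreated. As a quick sanity check afterwards (a sketch; both commands are read-only):

```shell
# The rx9p VM should now be gone from the instance listing...
gcloud compute instances list
# ...and, after a short delay, the rx9p node should drop out of the cluster.
kubectl get nodes
```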