Tags: google-cloud-platform, kubernetes, google-kubernetes-engine
I have a GKE cluster with a single node pool of size 2. When I add a third node, no Pods are distributed onto it.
Here is the original two-node node pool:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
gke-cluster0-pool-d59e9506-b7nb Ready <none> 13m v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t Ready <none> 18m v1.8.3-gke.0
And these are the Pods running on the original node pool:
$ kubectl get po -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default attachment-proxy-659bdc84d-ckdq9 1/1 Running 0 10m 10.0.38.3 gke-cluster0-pool-d59e9506-vp6t
default elasticsearch-0 1/1 Running 0 4m 10.0.39.11 gke-cluster0-pool-d59e9506-b7nb
default front-webapp-646bc49675-86jj6 1/1 Running 0 10m 10.0.38.10 gke-cluster0-pool-d59e9506-vp6t
default kafka-0 1/1 Running 3 4m 10.0.39.9 gke-cluster0-pool-d59e9506-b7nb
default mailgun-http-98f8d997c-hhfdc 1/1 Running 0 4m 10.0.38.17 gke-cluster0-pool-d59e9506-vp6t
default stamps-5b6fc489bc-6xtqz 2/2 Running 3 10m 10.0.38.13 gke-cluster0-pool-d59e9506-vp6t
default user-elasticsearch-6b6dd7fc8-b55xx 1/1 Running 0 10m 10.0.38.4 gke-cluster0-pool-d59e9506-vp6t
default user-http-analytics-6bdd49bd98-p5pd5 1/1 Running 0 4m 10.0.39.8 gke-cluster0-pool-d59e9506-b7nb
default user-http-graphql-67884c678c-7dcdq 1/1 Running 0 4m 10.0.39.7 gke-cluster0-pool-d59e9506-b7nb
default user-service-5cbb8cfb4f-t6zhv 1/1 Running 0 4m 10.0.38.15 gke-cluster0-pool-d59e9506-vp6t
default user-streams-0 1/1 Running 0 4m 10.0.39.10 gke-cluster0-pool-d59e9506-b7nb
default user-streams-elasticsearch-c64b64d6f-2nrtl 1/1 Running 3 10m 10.0.38.6 gke-cluster0-pool-d59e9506-vp6t
default zookeeper-0 1/1 Running 0 4m 10.0.39.12 gke-cluster0-pool-d59e9506-b7nb
kube-lego kube-lego-7799f6b457-skkrc 1/1 Running 0 10m 10.0.38.5 gke-cluster0-pool-d59e9506-vp6t
kube-system event-exporter-v0.1.7-7cb7c5d4bf-vr52v 2/2 Running 0 10m 10.0.38.7 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-648rh 2/2 Running 0 14m 10.0.38.2 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-fqjz6 2/2 Running 0 9m 10.0.39.2 gke-cluster0-pool-d59e9506-b7nb
kube-system heapster-v1.4.3-6fc45b6cc4-8cl72 3/3 Running 0 4m 10.0.39.6 gke-cluster0-pool-d59e9506-b7nb
kube-system k8s-snapshots-5699c68696-h8r75 1/1 Running 0 4m 10.0.38.16 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-778977457c-b48w5 3/3 Running 0 4m 10.0.39.5 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-dns-778977457c-sw672 3/3 Running 0 10m 10.0.38.9 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-autoscaler-7db47cb9b7-tjt4l 1/1 Running 0 10m 10.0.38.11 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-proxy-gke-cluster0-pool-d59e9506-b7nb 1/1 Running 0 9m 10.128.0.4 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-proxy-gke-cluster0-pool-d59e9506-vp6t 1/1 Running 0 14m 10.128.0.2 gke-cluster0-pool-d59e9506-vp6t
kube-system kubernetes-dashboard-76c679977c-mwqlv 1/1 Running 0 10m 10.0.38.8 gke-cluster0-pool-d59e9506-vp6t
kube-system l7-default-backend-6497bcdb4d-wkx28 1/1 Running 0 10m 10.0.38.12 gke-cluster0-pool-d59e9506-vp6t
kube-system nginx-ingress-controller-78d546664f-gf6mx 1/1 Running 0 4m 10.0.39.3 gke-cluster0-pool-d59e9506-b7nb
kube-system tiller-deploy-5458cb4cc-26x26 1/1 Running 0 4m 10.0.39.4 gke-cluster0-pool-d59e9506-b7nb
I then added another node to the node pool:
gcloud container clusters resize cluster0 --node-pool pool --size 3
The third node is added and Ready:
NAME STATUS ROLES AGE VERSION
gke-cluster0-pool-d59e9506-1rzm Ready <none> 3m v1.8.3-gke.0
gke-cluster0-pool-d59e9506-b7nb Ready <none> 14m v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t Ready <none> 19m v1.8.3-gke.0
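Before digging further, it is worth confirming the new node is actually schedulable. Something like the following (node name taken from the listing above) shows its taints, allocatable capacity, and scheduling-related conditions:

# Inspect taints, allocatable resources, and conditions on the new node
$ kubectl describe node gke-cluster0-pool-d59e9506-1rzm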
However, no Pods except those belonging to a DaemonSet are scheduled onto the added node:
$ kubectl get po -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default attachment-proxy-659bdc84d-ckdq9 1/1 Running 0 17m 10.0.38.3 gke-cluster0-pool-d59e9506-vp6t
default elasticsearch-0 1/1 Running 0 10m 10.0.39.11 gke-cluster0-pool-d59e9506-b7nb
default front-webapp-646bc49675-86jj6 1/1 Running 0 17m 10.0.38.10 gke-cluster0-pool-d59e9506-vp6t
default kafka-0 1/1 Running 3 11m 10.0.39.9 gke-cluster0-pool-d59e9506-b7nb
default mailgun-http-98f8d997c-hhfdc 1/1 Running 0 10m 10.0.38.17 gke-cluster0-pool-d59e9506-vp6t
default stamps-5b6fc489bc-6xtqz 2/2 Running 3 16m 10.0.38.13 gke-cluster0-pool-d59e9506-vp6t
default user-elasticsearch-6b6dd7fc8-b55xx 1/1 Running 0 17m 10.0.38.4 gke-cluster0-pool-d59e9506-vp6t
default user-http-analytics-6bdd49bd98-p5pd5 1/1 Running 0 10m 10.0.39.8 gke-cluster0-pool-d59e9506-b7nb
default user-http-graphql-67884c678c-7dcdq 1/1 Running 0 10m 10.0.39.7 gke-cluster0-pool-d59e9506-b7nb
default user-service-5cbb8cfb4f-t6zhv 1/1 Running 0 10m 10.0.38.15 gke-cluster0-pool-d59e9506-vp6t
default user-streams-0 1/1 Running 0 10m 10.0.39.10 gke-cluster0-pool-d59e9506-b7nb
default user-streams-elasticsearch-c64b64d6f-2nrtl 1/1 Running 3 17m 10.0.38.6 gke-cluster0-pool-d59e9506-vp6t
default zookeeper-0 1/1 Running 0 10m 10.0.39.12 gke-cluster0-pool-d59e9506-b7nb
kube-lego kube-lego-7799f6b457-skkrc 1/1 Running 0 17m 10.0.38.5 gke-cluster0-pool-d59e9506-vp6t
kube-system event-exporter-v0.1.7-7cb7c5d4bf-vr52v 2/2 Running 0 17m 10.0.38.7 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-648rh 2/2 Running 0 20m 10.0.38.2 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-8tb4n 2/2 Running 0 4m 10.0.40.2 gke-cluster0-pool-d59e9506-1rzm
kube-system fluentd-gcp-v2.0.9-fqjz6 2/2 Running 0 15m 10.0.39.2 gke-cluster0-pool-d59e9506-b7nb
kube-system heapster-v1.4.3-6fc45b6cc4-8cl72 3/3 Running 0 11m 10.0.39.6 gke-cluster0-pool-d59e9506-b7nb
kube-system k8s-snapshots-5699c68696-h8r75 1/1 Running 0 10m 10.0.38.16 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-778977457c-b48w5 3/3 Running 0 11m 10.0.39.5 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-dns-778977457c-sw672 3/3 Running 0 17m 10.0.38.9 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-autoscaler-7db47cb9b7-tjt4l 1/1 Running 0 17m 10.0.38.11 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-proxy-gke-cluster0-pool-d59e9506-1rzm 1/1 Running 0 4m 10.128.0.3 gke-cluster0-pool-d59e9506-1rzm
kube-system kube-proxy-gke-cluster0-pool-d59e9506-b7nb 1/1 Running 0 15m 10.128.0.4 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-proxy-gke-cluster0-pool-d59e9506-vp6t 1/1 Running 0 20m 10.128.0.2 gke-cluster0-pool-d59e9506-vp6t
kube-system kubernetes-dashboard-76c679977c-mwqlv 1/1 Running 0 17m 10.0.38.8 gke-cluster0-pool-d59e9506-vp6t
kube-system l7-default-backend-6497bcdb4d-wkx28 1/1 Running 0 17m 10.0.38.12 gke-cluster0-pool-d59e9506-vp6t
kube-system nginx-ingress-controller-78d546664f-gf6mx 1/1 Running 0 11m 10.0.39.3 gke-cluster0-pool-d59e9506-b7nb
kube-system tiller-deploy-5458cb4cc-26x26 1/1 Running 0 11m 10.0.39.4 gke-cluster0-pool-d59e9506-b7nb
What is going on here? Why aren't Pods spreading onto the added node? I expected them to be distributed to the third node. How can I get the workload to spread onto it?
Technically, in terms of the resource requests in its manifests, my entire application fits on a single node. Yet when the second node was added, the application was distributed onto it, so I assumed that when I added a third node the Pods would be scheduled onto that node as well. But that is not what I see: only DaemonSets are scheduled onto the third node. I have tried growing and shrinking the node pool, to no avail.
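If the goal is to spread replicas across nodes proactively rather than rely on resource pressure, pod anti-affinity can express that preference to the scheduler. Below is a minimal sketch, not my real manifest: the Deployment name, labels, image, and request values are illustrative placeholders, and the soft ("preferred") anti-affinity rule shown is available from Kubernetes 1.6 onward.

apiVersion: apps/v1beta1          # Deployment API group on 1.8; apps/v1 on newer clusters
kind: Deployment
metadata:
  name: user-http-graphql         # placeholder name
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: user-http-graphql
    spec:
      affinity:
        podAntiAffinity:
          # Prefer (but do not require) putting replicas on different nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: user-http-graphql
              topologyKey: kubernetes.io/hostname
      containers:
      - name: user-http-graphql
        image: gcr.io/my-project/user-http-graphql:latest   # placeholder image
        resources:
          requests:   # explicit requests also help the scheduler balance nodes
            cpu: 100m
            memory: 128Mi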
Update
The two preemptible nodes were restarted, and now all the Pods are on a single node. What is going on? Is increasing resource requests the only way to get them to spread out?
This is the expected behavior. New Pods will be scheduled onto empty nodes, but running Pods are not moved automatically. The Kubernetes scheduler is generally conservative about rescheduling, so it won't move Pods without a reason: a Pod can be stateful (a database, for example), and Kubernetes does not want to kill and reschedule it on its own.
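In the meantime, you can redistribute Pods manually by cordoning and draining one of the crowded nodes so its Pods are recreated elsewhere, then making the node schedulable again. A rough sketch, using a node name from the listing above; note that --delete-local-data also wipes emptyDir volumes, so use it with care around stateful Pods:

# Stop new Pods from landing on the crowded node
$ kubectl cordon gke-cluster0-pool-d59e9506-vp6t

# Evict its Pods (DaemonSet Pods are skipped); they are recreated on other nodes
$ kubectl drain gke-cluster0-pool-d59e9506-vp6t --ignore-daemonsets --delete-local-data

# Allow scheduling on the node again once the Pods have settled
$ kubectl uncordon gke-cluster0-pool-d59e9506-vp6t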
There is a project under development that does what you want: https://github.com/kubernetes-incubator/descheduler. I haven't used it myself, but it is being actively developed by the community.
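For reference, the descheduler is driven by a policy file. The sketch below follows the v1alpha1 format from the project's README; the strategy names are the project's own, while the threshold percentages here are illustrative, not recommendations:

apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Evict duplicates so replicas of one controller land on different nodes
  "RemoveDuplicates":
    enabled: true
  # Move Pods off overutilized nodes when underutilized nodes exist
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:        # below all of these, a node counts as underutilized
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:  # above any of these, a node's Pods are eviction candidates
          "cpu": 50
          "memory": 50
          "pods": 50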