Vow*_*eee | kubernetes, azure-aks
Recently we ran into a problem in our AKS cluster: with high pod memory requests (request 2Gi, limit 2Gi), node memory usage grew and the node count kept increasing. To reduce the node count, we lowered the memory request to 256Mi while keeping the limit at the same value (2Gi). After that, we noticed some strange behavior in the cluster.
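For reference, the change described above corresponds to a container spec along these lines (a minimal sketch; the actual manifest is not shown in the question):

```yaml
# Hypothetical container resources after the change: the small request
# lets the scheduler pack many pods onto one node, but each pod can
# still grow toward its 2Gi limit, overcommitting node memory.
resources:
  requests:
    memory: 256Mi   # was 2Gi
  limits:
    memory: 2Gi     # unchanged
```

With requests this far below limits, memory is heavily overcommitted (the 478% limits figure in the node summary below). Once actual usage exceeds the node's allocatable memory, the kubelet starts evicting pods, which is consistent with the long list of Evicted pods in the question.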
Resource                       Requests      Limits
--------                       --------      ------
cpu                            1895m (99%)   11450m (602%)
memory                         3971Mi (86%)  21830Mi (478%)
ephemeral-storage              0 (0%)        0 (0%)
hugepages-1Gi                  0 (0%)        0 (0%)
hugepages-2Mi                  0 (0%)        0 (0%)
attachable-volumes-azure-disk  0             0
NAME                               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
aks-nodepoolx-xxxxxxxx-vmss00000x  151m         7%     5318Mi          116%
NAME READY STATUS RESTARTS AGE
mymobile-mobile-xxxxx-ddvd6 2/2 Running 0 151m
myappsvc-xxxxxxxxxx-2t6gz 2/2 Running 0 5h3m
myappsvc-xxxxxxxxxx-4xnsh 0/2 Evicted 0 4h38m
myappsvc-xxxxxxxxxx-5b5mb 0/2 Evicted 0 4h28m
myappsvc-xxxxxxxxxx-5f52g 0/2 Evicted 0 4h19m
myappsvc-xxxxxxxxxx-5f8rz 0/2 Evicted 0 4h31m
myappsvc-xxxxxxxxxx-66lc9 0/2 Evicted 0 4h26m
myappsvc-xxxxxxxxxx-8cnfb 0/2 Evicted 0 4h27m
myappsvc-xxxxxxxxxx-b9f9h 0/2 Evicted 0 4h20m
myappsvc-xxxxxxxxxx-dfx9m 0/2 Evicted 0 4h30m
myappsvc-xxxxxxxxxx-fpwg9 0/2 Evicted 0 4h25m
myappsvc-xxxxxxxxxx-kclt8 0/2 Evicted 0 4h22m
myappsvc-xxxxxxxxxx-kzmxw 0/2 Evicted 0 4h33m
myappsvc-xxxxxxxxxx-lrrnr 2/2 Running 0 4h18m
myappsvc-xxxxxxxxxx-lx4bn 0/2 Evicted 0 4h32m
myappsvc-xxxxxxxxxx-nsc8t 0/2 Evicted 0 4h29m
myappsvc-xxxxxxxxxx-qmlrj 0/2 Evicted 0 4h24m
myappsvc-xxxxxxxxxx-qr75w 0/2 Evicted 0 4h27m
myappsvc-xxxxxxxxxx-tf8bn 0/2 Evicted 0 4h20m
myappsvc-xxxxxxxxxx-vfcdv 0/2 Evicted 0 4h23m
myappsvc-xxxxxxxxxx-vltgw 0/2 Evicted 0 4h31m
myappsvc-xxxxxxxxxx-xhqtb 0/2 Evicted 0 4h22m
There is a lot of discussion about removing CPU limits on K8s:

Best practices for CPU limits and requests on Kubernetes
Best practices for memory limits and requests on Kubernetes

To check a pod's throttling rate, just exec into the pod and run cat /sys/fs/cgroup/cpu,cpuacct/kubepods/{PODID}/{CONTAINERID}/cpu.stat.
nr_periods: total scheduling periods
nr_throttled: number of those periods in which the container was throttled
throttled_time: total throttled time (in ns)
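The fields above can be turned into a throttling ratio (throttled periods divided by total periods). A minimal sketch, using a made-up cpu.stat sample rather than real cluster output:

```python
# Hypothetical helper: compute the CPU throttling ratio from the text
# of a cgroup cpu.stat file. The sample numbers below are illustrative.
def throttle_ratio(cpu_stat_text: str) -> float:
    """Return nr_throttled / nr_periods, i.e. the fraction of
    scheduling periods in which the container was throttled."""
    stats = {}
    for line in cpu_stat_text.splitlines():
        key, _, value = line.partition(" ")
        if value.strip():
            stats[key] = int(value)
    periods = stats.get("nr_periods", 0)
    return stats.get("nr_throttled", 0) / periods if periods else 0.0

# Example with made-up numbers: throttled in 120 of 1000 periods.
sample = "nr_periods 1000\nnr_throttled 120\nthrottled_time 987654321"
print(f"throttled in {throttle_ratio(sample):.1%} of periods")
```

A sustained ratio well above a few percent usually means the CPU limit is too tight for the workload, which is why the linked discussions argue for dropping CPU limits while keeping requests.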