I've started adding liveness and readiness probes to my services, and I'm not sure whether I've actually gotten them working, because I can't confidently interpret the kubectl output.
kubectl describe pod mypod gives me something like this:
Name:           myapp-5798dd798c-t7dqs
Namespace:      dev
Node:           docker-for-desktop/192.168.65.3
Start Time:     Wed, 24 Oct 2018 13:22:54 +0200
Labels:         app=myapp
                pod-template-hash=1354883547
Annotations:    version: v2
Status:         Running
IP:             10.1.0.103
Controlled By:  ReplicaSet/myapp-5798dd798c
Containers:
  myapp:
    Container ID:   docker://5d39cb47d2278eccd6d28c1eb35f93112e3ad103485c1c825de634a490d5b736
    Image:          myapp:latest
    Image ID:       docker://sha256:61dafd0c208e2519d0165bf663e4b387ce4c2effd9237fb29fb48d316eda07ff
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 24 Oct 2018 13:23:06 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/healthz/live delay=0s timeout=10s period=60s #success=1 #failure=3
    Readiness:      http-get http://:80/healthz/ready delay=3s timeout=3s period=5s #success=1 #failure=3
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvnc2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-gvnc2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gvnc2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age  From                         Message
  ----     ------                 ---- ----                         -------
  Normal   Scheduled              84s  default-scheduler            Successfully assigned myapp-5798dd798c-t7dqs to docker-for-desktop
  Normal   SuccessfulMountVolume  84s  kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-gvnc2"
  Normal   Pulled                 75s  kubelet, docker-for-desktop  Container image "myapp:latest" already present on machine
  Normal   Created                74s  kubelet, docker-for-desktop  Created container
  Normal   Started                72s  kubelet, docker-for-desktop  Started container
  Warning  Unhealthy              65s  kubelet, docker-for-desktop  Readiness probe failed: Get http://10.1.0.103:80/healthz/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Now, I notice that the container has Status: Ready, but the last event in the events list shows the pod as Unhealthy because a readiness probe failed. (Looking at the application logs, I can see that plenty of incoming requests have hit the readiness probe since then, and they have all succeeded.)
How should I interpret this information? Does Kubernetes consider my pod ready, or not ready?
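For reference, the Liveness and Readiness lines in the output above map onto the following fields in the container spec (a sketch only; the paths, port, and timings are taken from the describe output, not from the actual manifest):

livenessProbe:
  httpGet:
    path: /healthz/live
    port: 80
  initialDelaySeconds: 0
  timeoutSeconds: 10
  periodSeconds: 60
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 80
  initialDelaySeconds: 3
  timeoutSeconds: 3
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3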
A Pod is considered ready when the readiness probes of all its containers return success. In your case, the readiness probe failed on the first attempt, but subsequent probes succeeded and the container went into the ready state. Below is an example of a failing readiness probe.
The readiness probe below was attempted 58 times over the last 11 minutes, and every attempt failed.
Events:
  Type     Reason     Age                  From                      Message
  ----     ------     ----                 ----                      -------
  Normal   Scheduled  11m                  default-scheduler         Successfully assigned default/upnready to mylabserver.com
  Normal   Pulling    11m                  kubelet, mylabserver.com  pulling image "luksa/kubia:v3"
  Normal   Pulled     11m                  kubelet, mylabserver.com  Successfully pulled image "luksa/kubia:v3"
  Normal   Created    11m                  kubelet, mylabserver.com  Created container
  Normal   Started    11m                  kubelet, mylabserver.com  Started container
  Warning  Unhealthy  103s (x58 over 11m)  kubelet, mylabserver.com  Readiness probe failed: Get http://10.44.0.123:80/: dial tcp 10.44.0.123:80: connect:
The container status is also reported as not ready, as shown below:
kubectl get pods -l run=upnready
NAME       READY   STATUS    RESTARTS   AGE
upnready   0/1     Running   0          17m
In your case, the readiness probe is passing its health checks and your pod is in the ready state.
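If you want to confirm what the cluster currently reports, you can also query the pod's Ready condition directly (a quick check using the pod name and namespace from your output; kubectl's JSONPath output supports this kind of filter):

kubectl get pod myapp-5798dd798c-t7dqs -n dev -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

For a ready pod this prints True.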
You can make effective use of initialDelaySeconds, periodSeconds, and timeoutSeconds to get better results. Here is an article on the topic.
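For example, a readiness probe tuned with those fields might look like the following (the values are illustrative assumptions, not recommendations for your specific app):

readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 80
  initialDelaySeconds: 10   # wait 10s after the container starts before the first probe
  periodSeconds: 5          # probe every 5 seconds
  timeoutSeconds: 3         # each probe must respond within 3 seconds
  failureThreshold: 3       # mark the container NotReady only after 3 consecutive failures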