Does a ClusterIP service distribute requests across replica Pods?

Mau*_*cio 6 kubernetes

Does anyone know whether a ClusterIP service distributes the workload across the target deployment's replicas?

I have 5 replicas of a backend, and a ClusterIP service selecting them. I also have 5 replicas of an nginx pod pointing at this backend deployment. But when I run one heavy request, the backend stops responding to other requests until the heavy one finishes.

Update

Here is my configuration:

Note: I have replaced some company-related information.

Content provider deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      containers:
      - name:  python-gunicorn
        image:  <my-user>/webapp:1.1.2
        command: ["/env/bin/gunicorn", "--bind", "0.0.0.0:8000", "main:app", "--chdir", "/deploy/app", "--error-logfile", "/var/log/gunicorn/error.log", "--timeout", "7200"]
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.25"
          limits:
            # memory: "128Mi"
            cpu: "0.4"
        ports:
        - containerPort: 8000
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /login
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 1200
      imagePullSecrets:
        # NOTE: the secret has to be created at the same namespace level on which this deployment was created
        - name: dockerhub
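(Worth noting: the gunicorn command above does not set a worker count, and gunicorn defaults to a single synchronous worker, so one long-running request occupies the only worker and that pod cannot serve anything else, which matches the symptom described. A sketch of the same container command with multiple threaded workers; the worker and thread counts here are illustrative assumptions, not tuned values:)

```yaml
# Sketch: same gunicorn invocation, but with --workers/--threads so a
# single slow request no longer blocks the whole pod. Counts are
# illustrative; with a 0.4-CPU limit, gthread workers are assumed
# rather than many sync workers.
command: ["/env/bin/gunicorn",
          "--bind", "0.0.0.0:8000",
          "--workers", "2",
          "--worker-class", "gthread",
          "--threads", "4",
          "main:app",
          "--chdir", "/deploy/app",
          "--error-logfile", "/var/log/gunicorn/error.log",
          "--timeout", "7200"]
```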

Content provider service:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: webapp
    tier: frontend
spec:
  # type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    app: webapp
    tier: frontend

Nginx deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: ymqytw/nginxhttps:1.5
        command: ["/home/auto-reload-nginx.sh"]
        ports:
        - containerPort: 443
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1200
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.1"
          limits:
            # memory: "128Mi"
            cpu: "0.25"
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume

Nginx service:

apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginxsvc
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx

Nginx configuration file:

server {
    server_name     local.mydomain.com;
    rewrite ^(.*) https://local.mydomain.com$1 permanent;
}

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        listen 443 ssl;

        root /usr/share/nginx/html;
        index index.html;

        keepalive_timeout    70;
        server_name www.local.mydomain.com local.mydomain.com;
        ssl_certificate /etc/nginx/ssl/tls.crt;
        ssl_certificate_key /etc/nginx/ssl/tls.key;

        location / {
            proxy_pass  http://localhost:8000;
            proxy_connect_timeout       7200;
            proxy_send_timeout          7200;
            proxy_read_timeout          7200;
            send_timeout                7200;
    }
}
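(One thing to note in the configuration above: `proxy_pass http://localhost:8000;` only reaches a gunicorn process running inside the same pod as nginx. Since the backend is a separate deployment behind the `frontend` ClusterIP service, the proxy would normally target the service's DNS name instead; a sketch, assuming both workloads live in the same namespace:)

```nginx
location / {
    # Target the ClusterIP service by its DNS name; kube-proxy then
    # load-balances the connections across the backend replicas.
    proxy_pass  http://frontend:8000;
    proxy_connect_timeout 7200;
    proxy_send_timeout    7200;
    proxy_read_timeout    7200;
    send_timeout          7200;
}
```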

Vik*_*ote 6

Yes, a Service of type ClusterIP uses kube-proxy's iptables rules to distribute requests roughly evenly in a round-robin manner.

The documentation says:

> By default, the choice of backend is round robin.

However, the round-robin request distribution may be affected by things like:

  1. Busy backends
  2. Sticky sessions
  3. Connection-based affinity (if a backend pod has already established a TCP session or a secure tunnel with a user who is calling the ClusterIP many times)
  4. Custom host-level/node-level iptables rules outside of Kubernetes
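(You can see this mechanism on a node: in iptables mode, kube-proxy installs `statistic`-mode rules that pick a backend endpoint at random with equal overall probability, which produces the roughly even spread described above. An illustrative excerpt for a service with 3 endpoints; the chain-name hashes here are invented:)

```
# iptables-save -t nat   (illustrative; KUBE-SVC-/KUBE-SEP- hashes are made up)
-A KUBE-SVC-ABCDEF -m statistic --mode random --probability 0.33333 -j KUBE-SEP-AAA111
-A KUBE-SVC-ABCDEF -m statistic --mode random --probability 0.50000 -j KUBE-SEP-BBB222
-A KUBE-SVC-ABCDEF -j KUBE-SEP-CCC333
```

The probabilities cascade: 1/3 of connections match the first rule, 1/2 of the remainder match the second, and the rest fall through to the third, so each endpoint receives about a third overall.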

  • @Mauricio Upvote my answer and accept it if you find it useful, so that others know it is the correct answer, thanks! https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace says _"any connections to this 'proxy port' are proxied to one of the Service's backend Pods"_. If requests are _not_ connection-based (UDP), the round-robin distribution happens at the request level. I have seen both in my tests. (2 upvotes)