NodePort service not available on all nodes

Tags: kubernetes, kube-proxy

I'm trying to run a 3-node Kubernetes cluster. I have the cluster up and running well enough that services run on different nodes. Unfortunately, I can't seem to get NodePort-based services to work correctly (as I currently understand correctness, anyway...). My issue is that any NodePort service I define is only externally available on the node where its pod is running, while my understanding is that it should be externally available on any node in the cluster.

One example is a local Jira service, which should run on port 8082 internally and 32760 externally. Here is the service definition (just the service part):

apiVersion: v1
kind: Service
metadata:
  name: jira
  namespace: wittlesouth
spec:
  ports:
  - port: 8082
  selector:
    app: jira
  type: NodePort

This is the output of kubectl get service --namespace wittlesouth:

NAME       TYPE           CLUSTER-IP      EXTERNAL-IP                       PORT(S)          AGE
jenkins    NodePort       10.100.119.22   <none>                            8081:31377/TCP   3d
jira       NodePort       10.105.148.66   <none>                            8082:32760/TCP   9h
ws-mysql   ExternalName   <none>          mysql.default.svc.cluster.local   3306/TCP         1d

The container for this service has its host port set to 8082. The three nodes in the cluster are nuc1, nuc2, and nuc3:

Eric:~ eric$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
nuc1      Ready     master    3d        v1.9.2
nuc2      Ready     <none>    2d        v1.9.2
nuc3      Ready     <none>    2d        v1.9.2
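As a sanity check (not part of the original post), the port the jira container actually declares can be read back with a jsonpath query; this assumes the pod carries the label app=jira, matching the service's selector:

```shell
# Hypothetical diagnostic: print the containerPort(s) declared by the jira pod(s).
# The -l app=jira label selector mirrors the selector in the service definition above.
kubectl -n wittlesouth get pods -l app=jira \
  -o jsonpath='{.items[*].spec.containers[*].ports[*].containerPort}'
```

If this prints a port other than 8082, the service's targetPort and the container's listening port disagree, which is a separate problem from the NodePort routing issue.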

Here are the results of attempting to reach the Jira instance via the host port and the node port:

Eric:~ eric$ curl https://nuc1.wittlesouth.com:8082/
curl: (7) Failed to connect to nuc1.wittlesouth.com port 8082: Connection refused
Eric:~ eric$ curl https://nuc2.wittlesouth.com:8082/
curl: (7) Failed to connect to nuc2.wittlesouth.com port 8082: Connection refused
Eric:~ eric$ curl https://nuc3.wittlesouth.com:8082/
curl: (51) SSL: no alternative certificate subject name matches target host name 'nuc3.wittlesouth.com'
Eric:~ eric$ curl https://nuc3.wittlesouth.com:32760/
curl: (51) SSL: no alternative certificate subject name matches target host name 'nuc3.wittlesouth.com'
Eric:~ eric$ curl https://nuc2.wittlesouth.com:32760/
^C
Eric:~ eric$ curl https://nuc1.wittlesouth.com:32760/
curl: (7) Failed to connect to nuc1.wittlesouth.com port 32760: Operation timed out

Based on my reading, it appears kube-proxy is not doing what it is supposed to. I tried reading through the kube-proxy troubleshooting documentation, but it seems a bit out of date (when I grep the output of iptables-save for the hostname, nothing turns up). Here is the Kubernetes version information:

Eric:~ eric$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
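A quick generic diagnostic (not from the original post) is to pull kube-proxy's recent logs and look for iptables sync errors; k8s-app=kube-proxy is the label kubeadm applies to its kube-proxy DaemonSet pods:

```shell
# Tail the last lines of every kube-proxy pod's log across the cluster,
# looking for errors about failing to program iptables rules.
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20
```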

It appears that kube-proxy is running:

eric@nuc2:~$ ps waux | grep kube-proxy
root      1963  0.5  0.1  54992 37556 ?        Ssl  21:43   0:02 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
eric      3654  0.0  0.0  14224  1028 pts/0    S+   21:52   0:00 grep --color=auto kube-proxy

Eric:~ eric$ kubectl get pods --namespace=kube-system
NAME                                      READY     STATUS    RESTARTS   AGE
calico-etcd-6vspc                         1/1       Running   3          2d
calico-kube-controllers-d669cc78f-b67rc   1/1       Running   5          3d
calico-node-526md                         2/2       Running   9          3d
calico-node-5trgt                         2/2       Running   3          2d
calico-node-r9ww4                         2/2       Running   3          2d
etcd-nuc1                                 1/1       Running   6          3d
kube-apiserver-nuc1                       1/1       Running   7          3d
kube-controller-manager-nuc1              1/1       Running   6          3d
kube-dns-6f4fd4bdf-dt5fp                  3/3       Running   12         3d
kube-proxy-8xf4r                          1/1       Running   1          2d
kube-proxy-tq4wk                          1/1       Running   4          3d
kube-proxy-wcsxt                          1/1       Running   1          2d
kube-registry-proxy-cv8x9                 1/1       Running   4          3d
kube-registry-proxy-khpdx                 1/1       Running   1          2d
kube-registry-proxy-r5qcv                 1/1       Running   1          2d
kube-registry-v0-wcs5w                    1/1       Running   2          3d
kube-scheduler-nuc1                       1/1       Running   6          3d
kubernetes-dashboard-845747bdd4-dp7gg     1/1       Running   4          3d

It also appears that kube-proxy is creating iptables entries for my service:

eric@nuc1:/var/lib$ sudo iptables-save | grep hostnames
eric@nuc1:/var/lib$ sudo iptables-save | grep jira
-A KUBE-NODEPORTS -p tcp -m comment --comment "wittlesouth/jira:" -m tcp --dport 32760 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wittlesouth/jira:" -m tcp --dport 32760 -j KUBE-SVC-MO7XZ6ASHGM5BOPI
-A KUBE-SEP-LP4GHTW6PY2HYMO6 -s 192.168.124.202/32 -m comment --comment "wittlesouth/jira:" -j KUBE-MARK-MASQ
-A KUBE-SEP-LP4GHTW6PY2HYMO6 -p tcp -m comment --comment "wittlesouth/jira:" -m tcp -j DNAT --to-destination 192.168.124.202:8082
-A KUBE-SERVICES ! -s 10.5.0.0/16 -d 10.105.148.66/32 -p tcp -m comment --comment "wittlesouth/jira: cluster IP" -m tcp --dport 8082 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.105.148.66/32 -p tcp -m comment --comment "wittlesouth/jira: cluster IP" -m tcp --dport 8082 -j KUBE-SVC-MO7XZ6ASHGM5BOPI
-A KUBE-SVC-MO7XZ6ASHGM5BOPI -m comment --comment "wittlesouth/jira:" -j KUBE-SEP-LP4GHTW6PY2HYMO6

Unfortunately, I know nothing about iptables at this point, so I can't tell whether those entries are correct. I suspect that my non-default network settings during kubeadm init may be related, since I tried to set up Kubernetes so that it would not use the same IP address range as my home network (which is 192.168-based). The kubeadm init statement I used was:

kubeadm init --pod-network-cidr=10.5.0.0/16 --apiserver-cert-extra-sans ['kubemaster.wittlesouth.com','192.168.5.10']

You'll notice I'm using Calico, which defaults to a 192.168.0.0-based pod network pool; I modified Calico's pod network pool setting when creating the Calico services (not sure whether that's relevant).
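If calicoctl is installed, the active pool can be compared against the 10.5.0.0/16 pod CIDR passed to kubeadm (a hypothetical check, not from the original post; a pool still set to 192.168.0.0/16 would overlap the physical network and could produce exactly this kind of broken cross-node routing):

```shell
# List Calico's IP pools; the CIDR column should match --pod-network-cidr (10.5.0.0/16).
calicoctl get ippool -o wide
```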

At this point, I've concluded that either I don't understand how NodePort services are supposed to work, or something is wrong with my cluster configuration. Any suggestions on next steps for diagnosis would be greatly appreciated!

Answer (ner*_*erd, score 6)

When you define a NodePort service, there are actually three ports in play:

  • The container port: this is the port your pod is actually listening on, and it is only reachable pod-to-pod, when hitting the container directly from within the cluster (JIRA's default port is 8080). You set targetPort in your service to this port.
  • The service port: this is the load-balanced port the service itself exposes internally within the cluster. With a single pod there is no load balancing, but it is still the entry point to your service. The port in your service definition defines this. If you don't specify a targetPort, it assumes that port and targetPort are the same.
  • The node port: the port exposed on each worker node that routes to your service. This port is typically in the 30000-33000 range (depending on how your cluster is configured). It is the only port you can reach from outside the cluster. It is defined with nodePort.

Assuming you are running JIRA on its standard port, you would need a service definition like this:

apiVersion: v1
kind: Service
metadata:
  name: jira
  namespace: wittlesouth
spec:
  ports:
  - port: 80          # this is the service port, can be anything
    targetPort: 8080  # this is the container port (must match the port your pod is listening on)
    nodePort: 32000   # if you don't specify this it randomly picks an available port in your NodePort range
  selector:
    app: jira
  type: NodePort

So with that configuration, an incoming request to the NodePort service flows: NodePort (32000) -> service (80) -> pod (8080). (Internally it may actually bypass the service; I'm not 100% sure about that, but you can conceptually think of it this way.)
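That chain can be probed hop by hop, roughly like this (a sketch using the ports from the example service above; the second command assumes the curlimages/curl image is pullable and launches a throwaway pod so the service port is tested from inside the cluster):

```shell
# Hop 1 - from outside the cluster: hit the node port on any node.
curl -s http://nuc3.wittlesouth.com:32000/

# Hop 2 - from inside the cluster: hit the service port via the service's DNS name,
# using a temporary pod that is deleted as soon as curl exits.
kubectl -n wittlesouth run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s http://jira.wittlesouth.svc.cluster.local:80/
```

Whichever hop is the first to fail tells you which layer (kube-proxy node routing, the service, or the pod itself) to investigate.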

It also looks like you're trying to hit JIRA directly over HTTPS. Did you configure a certificate in your JIRA pod? If so, you need to make sure it's a valid certificate for the node name you are hitting (e.g. nuc3.wittlesouth.com), or tell curl to ignore certificate validation errors with curl -k.