How do I network one pod to another pod in Kubernetes? (simple)

Pet*_*and 4 deployment networking containers web-deployment kubernetes

I've been banging my head against a wall on this on and off for a while now. There is a ton of information about Kubernetes on the web, but all of it assumes so much knowledge that n00bs like me don't really have much to go on.

So, can anyone share a simple example of the following (as a yaml file)? All I want is

  • two pods
  • let's say one pod has a backend (I don't know - node.js) and one has a frontend (say React)
  • a way to network between them

And then an example of calling an api call from the back to the front.

I started looking into this stuff, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have time to wade through the several different service layers mapped on top of kubernetes. I just want to figure out a trivial example of a network request.

Hopefully, if this example exists on stackoverflow, it will serve other people as well.

Any help would be appreciated. Thank you.

EDIT; it looks like the simplest example may be using an Ingress controller.

EDIT

I'm working hard at trying to deploy a minimal example - I'll walk through some of the steps here and point out my issues along the way.

So below is my yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:      
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80

What I believe this is doing:

  • Deploying the frontend and backend apps - I deployed patientplatypus/frontend_example and patientplatypus/backend_example to dockerhub, and the images are then pulled down. One open question I have: what if I don't want to pull the images from docker hub, and would rather just load them from my localhost - is that possible? In that case I would push my code to the production server, build the docker images on the server, and then upload to kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private. (See the sketch after this list for one possible approach.)

  • It's creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type LoadBalancer because they balance the traffic among the (in this case 3) replica sets that I have in each deployment.

  • Finally, I have an ingress controller which is supposed to allow my services to be routed through www.kubeplaytime.example and www.kubeplaytime.example/api. However, this is not working.
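
On the open question about local images, here is a possible sketch, assuming a local Minikube cluster (a cloud cluster would typically need a private registry instead); the :local tag and ./frontend path are illustrative, not from the original setup:

eval $(minikube docker-env)    # point docker at Minikube's own daemon for this shell
docker build -t patientplatypus/frontend_example:local ./frontend
# Then reference the local tag in the Deployment and disable pulling:
#   image: patientplatypus/frontend_example:local
#   imagePullPolicy: Never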

What happens when I run this command?

patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
  • So first, it appears to correctly create all the parts I need.

    patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
    NAME         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
    backend      LoadBalancer   10.0.18.174   <pending>        80:31649/TCP   1m
    frontend     LoadBalancer   10.0.100.65   <pending>        80:32635/TCP   1m
    kubernetes   ClusterIP      10.0.0.1      <none>           443/TCP        10d
    frontend     LoadBalancer   10.0.100.65   138.91.126.178   80:32635/TCP   2m
    backend      LoadBalancer   10.0.18.174   138.91.121.182   80:31649/TCP   2m

  • Second, if I watch the services, I eventually get IP addresses that I can use to navigate to these sites in my browser. Each of the IP addresses above routes me to the frontend and backend respectively.

HOWEVER

I'm running into an issue when I try to use the Ingress controller - it seemingly deploys, but I don't know how to get to it.

patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME       HOSTS                      ADDRESS   PORTS     AGE
frontend   www.kubeplaytime.example             80        16m
  • So I don't have an address I can use, and www.kubeplaytime.example does not appear to work.
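
A first debugging step here might be to describe the Ingress and check its events (a sketch; the output depends on whether any ingress controller is actually running in the cluster):

kubectl describe ingress frontend
# If no controller is running, the Ingress resource is inert and
# nothing will ever populate its ADDRESS column.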

In order to route to the ingress extension I just created, it seems that I have to use a service and deployment on it to get an IP address, but this quickly starts to look incredibly complicated.

For example, take a look at this medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e

It seems that the code necessary just to add the service routing to the Ingress (i.e. what he calls the Ingress Controller) is this:

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

It appears that this yaml has to be appended alongside the rest of my code above in order to get a service entry point for my ingress routes, and it does indeed give an ip:

patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
backend                 LoadBalancer   10.0.31.209   <pending>     80:32428/TCP                 4m
frontend                LoadBalancer   10.0.222.47   <pending>     80:32482/TCP                 4m
ingress-nginx           LoadBalancer   10.0.28.157   <pending>     80:30573/TCP,443:30802/TCP   4m
kubernetes              ClusterIP      10.0.0.1      <none>        443/TCP                      10d
nginx-default-backend   ClusterIP      10.0.71.121   <none>        80/TCP                       4m
frontend   LoadBalancer   10.0.222.47   40.121.7.66   80:32482/TCP   5m
ingress-nginx   LoadBalancer   10.0.28.157   40.121.6.179   80:30573/TCP,443:30802/TCP   6m
backend   LoadBalancer   10.0.31.209   40.117.248.73   80:32428/TCP   7m

So ingress-nginx appears to be the site I want to get to. Navigating to 40.121.6.179 returns the default 404 message (default backend - 404) - it does not route to frontend on / the way it should. /api returns the same. And navigating to my hostname www.kubeplaytime.example returns a 404 from the browser - there's no error handling.
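
Since www.kubeplaytime.example is not real DNS, one sanity check (a sketch, reusing the external IP from the output above) is to send the expected Host header explicitly - if the Ingress rules match, these should reach the frontend and backend services instead of the default 404 backend:

curl -v -H 'Host: www.kubeplaytime.example' http://40.121.6.179/
curl -v -H 'Host: www.kubeplaytime.example' http://40.121.6.179/api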

QUESTIONS

  • Is an Ingress Controller strictly necessary? And if so, is there a less complicated version of this?

  • I feel like I'm close, what am I doing wrong?

FULL YAML

Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938

Thank you for your help!

EDIT EDIT

I attempted to use HELM. On the surface it appears to be a simple interface, so I tried spinning it up:

patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME:   erstwhile-beetle
LAST DEPLOYED: Sun May  6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                       DATA  AGE
erstwhile-beetle-nginx-ingress-controller  1     1s

==> v1/Service
NAME                                            TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)                     AGE
erstwhile-beetle-nginx-ingress-controller       LoadBalancer  10.0.216.38  <pending>    80:31494/TCP,443:32118/TCP  1s
erstwhile-beetle-nginx-ingress-default-backend  ClusterIP     10.0.55.224  <none>       80/TCP                      1s

==> v1beta1/Deployment
NAME                                            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
erstwhile-beetle-nginx-ingress-controller       1        1        1           0          1s
erstwhile-beetle-nginx-ingress-default-backend  1        1        1           0          1s

==> v1beta1/PodDisruptionBudget
NAME                                            MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
erstwhile-beetle-nginx-ingress-controller       1              N/A              0                    1s
erstwhile-beetle-nginx-ingress-default-backend  1              N/A              0                    1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz       0/1    ContainerCreating  0         1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w  0/1    ContainerCreating  0         1s


NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

This seems really great - it spins everything up and gives an example of how to add an Ingress. Since I spun helm up against a blank kubectl context, I added what I thought was required using the following yaml file.

The file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /api
            backend:
              serviceName: backend
              servicePort: 80
          - path: /
            frontend:
              serviceName: frontend
              servicePort: 80

However, when I deploy this to the cluster, I get this error:

patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false

So the question then becomes, well crap, how do I debug this? If you spit out the code that helm produces, it's basically non-readable by a person - there's no way to go in there and figure out what's going on.

Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!

If anyone has a better way to debug a helm deploy, add it to the list of open questions.
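
(In this particular case, the validation error actually points at the hand-written Ingress rather than at helm: paths[1] uses a frontend: key where the schema only knows backend:.) For inspecting what helm itself installed, a sketch using the release name from the output above:

helm get manifest erstwhile-beetle        # dump the exact manifests of the installed release
helm fetch stable/nginx-ingress --untar   # or fetch the chart and render its
helm template ./nginx-ingress             # templates locally, without installing anything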

EDIT EDIT EDIT

In order to simplify things, I attempted to make the call from one pod to the other using only namespaces.

So here is my React code where I'm making the http request:

axios.get('http://backend/test')
.then(response=>{
  console.log('return from backend and response: ', response);
})
.catch(error=>{
  console.log('return from backend and error: ', error);
})

I also tried http://backend.exampledeploy.svc.cluster.local/test without luck.
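
One way to check whether the name resolves at all inside the cluster (a sketch; the pod name comes from the kubectl get all output below, and it assumes nslookup exists in the image) - cluster DNS is only visible inside the cluster, never in a user's browser:

kubectl exec -ti frontend-647c99cdcf-2mmvn --namespace=exampledeploy -- \
  nslookup backend.exampledeploy.svc.cluster.local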

Here is my node code handling the get:

router.get('/test', function(req, res, next) {
  res.json({"test":"test"})
});

Here is the yaml file that I'm uploading to the cluster with kubectl:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: exampledeploy
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: exampledeploy
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000

The upload to the cluster appears to work, as I can see it in my terminal:

patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy 
NAME                            READY     STATUS    RESTARTS   AGE
pod/backend-584c5c59bc-5wkb4    1/1       Running   0          15m
pod/backend-584c5c59bc-jsr4m    1/1       Running   0          15m
pod/backend-584c5c59bc-txgw5    1/1       Running   0          15m
pod/frontend-647c99cdcf-2mmvn   1/1       Running   0          15m
pod/frontend-647c99cdcf-79sq5   1/1       Running   0          15m
pod/frontend-647c99cdcf-r5bvg   1/1       Running   0          15m

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
service/backend    LoadBalancer   10.0.112.160   168.62.175.155   80:31498/TCP   15m
service/frontend   LoadBalancer   10.0.246.212   168.62.37.100    80:31139/TCP   15m

NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/backend    3         3         3            3           15m
deployment.extensions/frontend   3         3         3            3           15m

NAME                                        DESIRED   CURRENT   READY     AGE
replicaset.extensions/backend-584c5c59bc    3         3         3         15m
replicaset.extensions/frontend-647c99cdcf   3         3         3         15m

NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend    3         3         3            3           15m
deployment.apps/frontend   3         3         3            3           15m

NAME                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/backend-584c5c59bc    3         3         3         15m
replicaset.apps/frontend-647c99cdcf   3         3         3         15m

However, when I attempt to make the request, I get the following error:

return from backend and error:  
Error: Network Error
Stack trace:
createError@http://168.62.37.100/static/js/bundle.js:1555:15
handleError@http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14

Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.
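
As a cross-check (a sketch, using the backend's external IP from the kubectl output above): the browser can only reach addresses that are routable outside the cluster, so the same request against the LoadBalancer IP should work where the in-cluster name backend cannot:

curl http://168.62.175.155/test    # the backend Service's external IP, reachable from outside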

EDIT X5

I've determined that I can curl the backend from the command line by exec'ing into the pod like this:

patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
*   Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
> 
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
< 
* Connection #0 to host backend left intact
{"test":"test"}

What this no doubt means is that, because the front-end code is executed in the browser, it needs Ingress to gain entry into the pod: the front end's http requests can't be served by simple pod networking alone. I wasn't sure of this, but it means an Ingress is necessary.

hel*_*ert 6

First of all, let's clarify some apparent misconceptions. You mention that your front end is a React application, which will presumably run in the users' browsers. For this to work, your actual problem is not your backend and frontend pods communicating with each other, but the browser needing to be able to connect to both these pods (to the frontend pod in order to load the React application, and to the backend pod for the React application to make API calls).

To visualize:

                                                 +---------+
                                             +---| Browser |---+                                                 
                                             |   +---------+   |
                                             V                 V
+-----------+     +----------+         +-----------+     +----------+
| Front-end |---->| Back-end |         | Front-end |     | Back-end |
+-----------+     +----------+         +-----------+     +----------+
      (what you asked for)                     (what you need)

As already mentioned, the easiest solution for this would be to use an Ingress controller. I won't go into detail here on how to set up an Ingress controller; in some cloud environments (like GKE) you will be able to use an Ingress controller supplied to you by the cloud provider. Otherwise, you can set up the NGINX Ingress controller; have a look at the NGINX Ingress controller deployment guide for more information.

Defining services

Start by defining Service resources for both your frontend and backend application (these will also allow your pods to communicate with each other). A service definition might look like this:

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Make sure that your pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).

If you want to establish pod-to-pod communication, you're done now. Within each pod, you can now use backend.<namespace>.svc.cluster.local (or backend as shorthand) and frontend as hostnames to connect to those pods.
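
For instance, from a shell in any pod (a sketch, assuming the default namespace; the /test route is taken from the question's node code):

curl http://backend/test                              # short name: same namespace only
curl http://backend.default.svc.cluster.local/test    # fully qualified: anywhere in the cluster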

Defining the Ingress

Next, you can define the Ingress resources; since both services need connectivity from outside the cluster (the users' browser), you will need Ingress definitions for both services:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:      
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
spec:      
  rules:
  - host: api.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80

Alternatively, you can also aggregate frontend and backend using a single Ingress resource (there's no 'right' answer here, it's just a matter of preference):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:      
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80

After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
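
For testing without real DNS records, a sketch: point both hostnames at the Ingress controller's external IP in your local hosts file (the placeholder stands for whatever kubectl get services reports for the controller):

echo '<ingress-controller-external-ip> www.your-application.example api.your-application.example' \
  | sudo tee -a /etc/hosts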


Pet*_*and 6

It turns out I was over-complicating this. Here is the Kubernetes file that does what I want. You can do this using two deployments (frontend and backend) and one service entrypoint. As far as I can tell, a service can load balance to many (not just 2) different deployments, meaning for practical development this should be a good start to microservice development. One of the benefits of the ingress method is allowing the use of path names rather than port numbers, but given the difficulty it doesn't seem practical in development.

Here is the yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplayfrontend
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplaybackend
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: entrypt
spec:
  type: LoadBalancer
  ports:
  - name: backend
    port: 8080
    targetPort: 5000
  - name: frontend
    port: 81
    targetPort: 3000
  selector:
    app: exampleapp
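
Once entrypt gets an external IP (the script below ends with kubectl get services --watch for exactly that), each app should be reachable on its own port of the single service - a sketch, with the IP as a placeholder:

curl http://<entrypt-external-ip>:81/          # React frontend (port 81 -> targetPort 3000)
curl http://<entrypt-external-ip>:8080/test    # node backend  (port 8080 -> targetPort 5000)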

Here are the bash commands I use to spin it up (you may have to add a login command - docker login - to push to dockerhub):

#!/bin/bash

# stop all containers
echo stopping all containers
docker stop $(docker ps -aq)
# remove all containers
echo removing all containers
docker rm $(docker ps -aq)
# remove all images
echo removing all images
docker rmi $(docker images -q)

echo building backend
cd ./backend
docker build -t patientplatypus/kubeplaybackend .
echo push backend to dockerhub
docker push patientplatypus/kubeplaybackend:latest

echo building frontend
cd ../frontend
docker build -t patientplatypus/kubeplayfrontend .
echo push backend to dockerhub
docker push patientplatypus/kubeplayfrontend:latest

echo now working on kubectl
cd ..
echo deleting previous variables
kubectl delete pods,deployments,services entrypt backend frontend
echo creating deployment
kubectl create -f kube-deploy.yaml
echo watching services spin up
kubectl get services --watch

The actual code is just a frontend React application making an axios http call to a backend node route in componentDidMount of the starting app page.

You can also see the working example here: https://github.com/patientplatypus/KubernetesMultiPodCommunication

Thanks again everyone for your help.