HostPath: assign a PersistentVolume to a specific worker node in the cluster

ccd*_*ccd 4 kubernetes persistent-volumes persistent-volume-claims

I created a cluster with kubeadm; I have one master node and one worker node.

Now I want to create a persistentVolume on the worker node that will be bound to the Postgres pod.

I expected this manifest to create the persistentVolume at the path /postgres on the worker node, but hostPath does not seem to work this way in a cluster. How should I assign this volume to a specific node?

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-postgres
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/postgres"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
      - name: vol-postgres
        persistentVolumeClaim:
          claimName: pvc-postgres
      containers:
      - name: postgres
        image: postgres:12
        imagePullPolicy: Always
        env:
        - name: DB_USER
          value: postgres
        - name: DB_PASS
          value: postgres
        - name: DB_NAME
          value: postgres
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: "/postgres"
          name: vol-postgres
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - name: postgres
    port: 5432
    targetPort: postgres
  selector:
    app: postgres

Pjo*_*erS 5

According to the documentation:


A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.


In short, the hostPath type refers to a resource on the node (machine or VM) where the pod is scheduled. This means the folder must already exist on that node. To pin everything to a specific node, you have to use a node selector in both the Deployment and the PV.
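For completeness: inside a pod template, the same pinning can also be written with the shorter nodeSelector form, which matches node labels directly. A minimal sketch, assuming the node is named ubuntu18-kubeadm-worker1 as in this answer (replace it with the name shown by `kubectl get nodes`); note that a PV itself does not support nodeSelector, so the PV still needs spec.nodeAffinity:

```yaml
# Hypothetical sketch: pin the pod to one node by its hostname label.
spec:
  nodeSelector:
    kubernetes.io/hostname: ubuntu18-kubeadm-worker1
```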


Depending on the situation, using hostPath is not the best idea, but below is an example YAML that should illustrate the concept. It is based on your YAML but uses the nginx image.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-postgres
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/postgres" ## this folder must already exist on the node; mind its permissions (tmp is used here because it is world-writable)
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ubuntu18-kubeadm-worker1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: /home    ## path to the folder inside the container
          name: vol-postgres
      affinity:               ## schedule all pods on the specific node ubuntu18-kubeadm-worker1
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ubuntu18-kubeadm-worker1
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
      - name: vol-postgres
        persistentVolumeClaim:
          claimName: pvc-postgres

persistentvolume/pv-postgres created
persistentvolumeclaim/pvc-postgres created
deployment.apps/postgres created

Unfortunately, a PV binds to a PVC in a 1:1 relationship, so you would need to create a new PV and PVC every time.
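Because binding is 1:1, it can also help to pin the claim to the intended PV explicitly; otherwise the control plane may bind it to any available PV whose capacity and access modes match. A minimal sketch using this answer's PV name (spec.volumeName is a standard field of the core v1 PersistentVolumeClaim):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  volumeName: pv-postgres   ## bind to this specific PV instead of any matching one
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```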


However, if you are using hostPath, it is enough to specify nodeAffinity, volumeMounts, and volumes in the Deployment YAML, without any PV or PVC.
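As a side note, hostPath also accepts an optional type field; with DirectoryOrCreate the kubelet creates the directory on the node if it is missing, which avoids pre-creating /tmp/postgres by hand. A sketch using this answer's path:

```yaml
volumes:
- name: vol-postgres
  hostPath:
    path: /tmp/postgres
    type: DirectoryOrCreate   ## kubelet creates the directory (mode 0755) if absent
```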

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - mountPath: /home
          name: vol-postgres
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ubuntu18-kubeadm-worker1
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
      - name: vol-postgres
        hostPath:
          path: /tmp/postgres

deployment.apps/postgres created

user@ubuntu18-kubeadm-master:~$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
postgres-77bc9c4566-jgxqq   1/1     Running   0          9s
user@ubuntu18-kubeadm-master:~$ kk exec -ti postgres-77bc9c4566-jgxqq /bin/bash
root@ubuntu18-kubeadm-worker1:/# cd home
root@ubuntu18-kubeadm-worker1:/home# ls
test.txt  txt.txt