I am trying to create a Kubernetes pod with a single container that has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
    - image: my-username/my-project
      name: my-project
      ports:
        - containerPort: 80
          name: nginx-http
        - containerPort: 443
          name: nginx-ssl-https
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /home/projects/my-project/media/upload
          name: pd-data
        - mountPath: /home/projects/my-project/backups
          name: pd2-data
  imagePullSecrets:
    - name: vpregistrykey
  volumes:
    - name: pd-data
      persistentVolumeClaim:
        claimName: pd-claim
    - name: pd2-data
      persistentVolumeClaim:
        claimName: pd2-claim
I am using Persistent Volumes and Persistent Volume Claims, like so:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
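The second PV and PVC (pd2-claim, which the Pod references) are not shown; presumably they simply mirror the first. A sketch of what they might look like, assuming the second disk is named pd2-disk:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd2-disk            # assumed name, mirroring pd-disk
  labels:
    name: pd2-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd2-disk"       # assumed GCE disk name
    fsType: "ext4"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd2-claim            # matches the claimName in the Pod spec
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi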
I originally created the disk with the following command:
$ gcloud compute disks create --size 250GB pd-disk
The second disk, as well as the second PV and PVC, were created the same way. When the pod is created everything seems to work fine and no errors are thrown. Now comes the strange part: every time I restart the pod, one of the paths is mounted correctly (and is therefore persistent), while the other one is wiped.
I have tried recreating everything from scratch, but nothing changed. Also, from the pod description, both volumes appear to be mounted correctly:
$ kubectl describe pod my-project
Name:       my-project
...
Volumes:
  pd-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd-claim
    ReadOnly:   false
  pd2-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd2-claim
    ReadOnly:   false
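One quick way to see which filesystem actually backs each of the two paths inside the running container would be something along these lines:

$ kubectl exec my-project -- df -h \
    /home/projects/my-project/media/upload \
    /home/projects/my-project/backups

A path that is really backed by the GCE persistent disk will typically show a /dev/sd* device, whereas a path that has fallen back to the container's own filesystem will show the overlay filesystem.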
Any help is appreciated. Thanks.
I don't see any immediate problem that would cause the behavior described above! What I would ask you to try instead is to use a Deployment rather than a bare Pod, as many here recommend, especially when working with PVs and PVCs. A Deployment takes care of many things in order to maintain the "desired state". I have attached my code below for your reference; it works, and both volumes stay persistent even after the pods are deleted/killed/restarted, because that is managed by the Deployment's desired state.
You will find two differences between my code and yours:
Deployment yml.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: platform
  labels:
    component: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
    spec:
      nodeSelector:
        role: app-1
      containers:
        - name: nginx
          image: vip-intOAM:5001/nginx:1.15.3
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/etc/nginx/conf.d/"
              name: nginx-confd
            - mountPath: "/var/www/"
              name: nginx-web-content
      volumes:
        - name: nginx-confd
          persistentVolumeClaim:
            claimName: glusterfsvol-nginx-confd-pvc
        - name: nginx-web-content
          persistentVolumeClaim:
            claimName: glusterfsvol-nginx-web-content-pvc
One of my PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-nginx-confd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: gluster-cluster
    path: nginx-confd
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-nginx-confd-pvc
    namespace: platform
The PVC for the above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfsvol-nginx-confd-pvc
  namespace: platform
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
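After applying the manifests, it is also worth confirming that each PV actually ends up Bound to the claim you intended, for example:

$ kubectl get pv
$ kubectl get pvc --all-namespaces

The CLAIM column of kubectl get pv shows which PVC each volume is bound to, and both claims should report a STATUS of Bound.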