I'm having some trouble mounting a ReadOnlyMany persistent volume to multiple pods on GKE. Currently it only mounts on one pod and fails to mount on any other pod (because the first pod is already using the volume), which limits the deployment to a single pod.
I suspect the issue is related to the volume being populated from a volume snapshot.
Having looked at related questions, I've already checked that spec.containers.volumeMounts.readOnly = true and spec.containers.volumes.persistentVolumeClaim.readOnly = true, which seem to be the most common fixes for similar problems.
I've included the relevant yaml below. Any help would be much appreciated!
Here's (most of) the deployment spec:
spec:
  containers:
  - env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    image: eu.gcr.io/myimage
    imagePullPolicy: IfNotPresent
    name: monsoon-server-sha256-1
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/sample-ssd
      name: sample-ssd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gke-cluster-1-default-pool-3d6123cf-kcjo
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 29
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: sample-ssd
    persistentVolumeClaim:
      claimName: sample-ssd-read-snapshot-pvc-snapshot-5
      readOnly: true
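In case it helps with diagnosis, these are the commands I'd use to confirm that the disk is attached to a single node and why the other pods can't start (the pod name is a placeholder):
# Look at the events of a pod that fails to start (volume attach errors show up here)
kubectl describe pod <pending-pod-name>
# List the VolumeAttachment objects to see which node currently holds the disk
kubectl get volumeattachment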
The storage class (which is also the default storage class for this cluster):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-ssd
provisioner: pd.csi.storage.gke.io
volumeBindingMode: Immediate
parameters:
  type: pd-ssd
The PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-ssd-read-snapshot-pvc-snapshot-5
spec:
  storageClassName: sample-ssd
  dataSource:
    name: sample-snapshot-5
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
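For completeness, the PVC above references a pre-existing VolumeSnapshot named sample-snapshot-5. A minimal sketch of such an object looks like this (the snapshot class name and source PVC name here are assumptions, not taken from my actual setup):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: sample-snapshot-5
spec:
  volumeSnapshotClassName: pd-snapshot-class        # assumed snapshot class
  source:
    persistentVolumeClaimName: sample-ssd-source-pvc  # assumed source PVC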
Google engineers are aware of this issue.
You can find more details in the issue report and pull request on GitHub.
There's a temporary workaround if you're trying to provision a PD from a snapshot and make it ROX:
1. Provision a PVC with the snapshot as data source and ReadWriteOnce access mode; this will create a new Compute Disk with the content of the source disk.
2. Take the PV that was provisioned and copy it to a new PV that's ROX, according to the docs.
You can execute it with the following commands:
Step 1: Provision a PVC with the data source as RWO:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workaround-pvc
spec:
  storageClassName: ''
  dataSource:
    name: sample-ss
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
You can check the disk name with the command kubectl get pvc and look at the VOLUME column; that value is the <disk_name>.
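The VOLUME column shows the claim's spec.volumeName, so you can also fetch it directly with a jsonpath query (workaround-pvc is the claim created in step 1):
# Prints the same value as the VOLUME column of `kubectl get pvc`
kubectl get pvc workaround-pvc -o jsonpath='{.spec.volumeName}'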
Step 2: Take the PV that was provisioned and copy it to a new PV that's ROX. As mentioned in the docs, you need to create another disk using the previous disk (created in step 1) as source:
# Create a disk snapshot:
gcloud compute disks snapshot <disk_name>
# Create a new disk using snapshot as source
gcloud compute disks create pvc-rox --source-snapshot=<snapshot_name>
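As a concrete usage example, assuming the disk from step 1 is named pvc-0123abcd and lives in zone europe-west1-b (both values are placeholders, adjust them to your cluster), the commands could look like:
# Snapshot the disk that backs workaround-pvc (disk name, zone and snapshot name are placeholders)
gcloud compute disks snapshot pvc-0123abcd --zone=europe-west1-b --snapshot-names=workaround-snapshot
# Create the new disk that the ReadOnlyMany PV will point at
gcloud compute disks create pvc-rox --source-snapshot=workaround-snapshot --zone=europe-west1-b --type=pd-ssd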
Then create a new PV and PVC with ReadOnlyMany:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ''
  capacity:
    storage: 20Gi
  accessModes:
  - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: pvc-rox
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  storageClassName: ''
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
As mentioned here, add readOnly: true to both your volumes and volumeMounts entries:
readOnly: true
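Applied to the deployment from the question, that means pointing the volume at the new claim and keeping the read-only flag in both places. A sketch (container name and mount path are taken from the question; the claim name assumes the my-readonly-pvc created above):
spec:
  containers:
  - name: monsoon-server-sha256-1
    volumeMounts:
    - mountPath: /mnt/sample-ssd
      name: sample-ssd
      readOnly: true               # read-only at the mount level
  volumes:
  - name: sample-ssd
    persistentVolumeClaim:
      claimName: my-readonly-pvc   # the ROX claim created above
      readOnly: true               # read-only at the volume level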