Tags: ceph, rook-storage, kubernetes-rook
I built a Ceph cluster with Rook, but my PVC is stuck in the Pending state. When I run kubectl describe pvc, I see this event from the persistentvolume-controller:
    waiting for a volume to be created, either by external provisioner "rook-ceph.rbd.csi.ceph.com" or manually created by system administrator
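The event only says that the provisioner rook-ceph.rbd.csi.ceph.com never answered. To rule out a simple name mismatch first, the pending PVC can be cross-checked against its StorageClass roughly like this (my-pvc and the StorageClass name rook-ceph-block are placeholders taken from the Rook examples, not my actual object names):

    # Which StorageClass does the pending PVC reference?
    kubectl get pvc my-pvc -o jsonpath='{.spec.storageClassName}{"\n"}'

    # Does that StorageClass point at the provisioner named in the event?
    kubectl get storageclass rook-ceph-block -o jsonpath='{.provisioner}{"\n"}'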
All of my pods are in the Running state:
    NAME                                                     READY   STATUS      RESTARTS   AGE
    csi-cephfsplugin-ntqk6                                   3/3     Running     0          14d
    csi-cephfsplugin-pqxdw                                   3/3     Running     6          14d
    csi-cephfsplugin-provisioner-c68f789b8-dt4jf             6/6     Running     49         14d
    csi-cephfsplugin-provisioner-c68f789b8-rn42r             6/6     Running     73         14d
    csi-rbdplugin-6pgf4                                      3/3     Running     0          14d
    csi-rbdplugin-l8fkm                                      3/3     Running     6          14d
    csi-rbdplugin-provisioner-6c75466c49-tzqcr               6/6     Running     106        14d
    csi-rbdplugin-provisioner-6c75466c49-x8675               6/6     Running     17         14d
    rook-ceph-crashcollector-compute08.dc-56b86f7c4c-9mh2j   1/1     Running     2          12d
    rook-ceph-crashcollector-compute09.dc-6998676d86-wpsrs   1/1     Running     0          12d
    rook-ceph-crashcollector-compute10.dc-684599bcd8-7hzlc   1/1     Running     0          12d
    rook-ceph-mgr-a-69fd54cccf-tjkxh                         1/1     Running     200        12d
    rook-ceph-mon-at-8568b88589-2bm5h                        1/1     Running     0          4d3h
    rook-ceph-mon-av-7b4444c8f4-2mlpc                        1/1     Running     0          4d1h
    rook-ceph-mon-aw-7df9f76fcd-zzmkw                        1/1     Running     0          4d1h
    rook-ceph-operator-7647888f87-zjgsj                      1/1     Running     1          15d
    rook-ceph-osd-0-6db4d57455-p4cz9                         1/1     Running     2          12d
    rook-ceph-osd-1-649d74dc6c-5r9dj                         1/1     Running     0          12d
    rook-ceph-osd-2-7c57d4498c-dh6nk                         1/1     Running     0          12d
    rook-ceph-osd-prepare-compute08.dc-gxt8p                 0/1     Completed   0          3h9m
    rook-ceph-osd-prepare-compute09.dc-wj2fp                 0/1     Completed   0          3h9m
    rook-ceph-osd-prepare-compute10.dc-22kth                 0/1     Completed   0          3h9m
    rook-ceph-tools-6b4889fdfd-d6xdg                         1/1     Running     0          12d
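Some of these restart counts look high to me (200 for the mgr, 106 for one rbd provisioner), so this is roughly how I have been looking at why they restart; the pod and container names below are just the ones from the table above, nothing more is assumed:

    # Last state / exit reason of a frequently restarting pod
    kubectl -n rook-ceph describe pod rook-ceph-mgr-a-69fd54cccf-tjkxh

    # Logs from the previous (crashed) instance of one provisioner sidecar
    kubectl -n rook-ceph logs csi-rbdplugin-provisioner-6c75466c49-tzqcr -c csi-resizer --previous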
Here is the output of kubectl logs -n rook-ceph csi-cephfsplugin-provisioner-c68f789b8-dt4jf csi-provisioner:
    I0120 11:57:13.283362 1 csi-provisioner.go:121] Version: v2.0.0
    I0120 11:57:13.283493 1 csi-provisioner.go:135] Building kube configs for running in cluster...
    I0120 11:57:13.294506 1 connection.go:153] Connecting to unix:///csi/csi-provisioner.sock
    I0120 11:57:13.294984 1 common.go:111] Probing CSI driver for readiness
    W0120 11:57:13.296379 1 metrics.go:142] metrics endpoint will not be started because `metrics-address` was not specified.
    I0120 11:57:13.299629 1 leaderelection.go:243] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
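This log stops at the leader-election attempt, so this replica may simply not be the active one. If it helps, the current lease holders can be listed like this; the lease name is the one visible in the log line above, and rook-ceph is the default namespace, so these are the only assumptions:

    # Show which provisioner replica currently holds each leader-election lease
    kubectl -n rook-ceph get lease
    kubectl -n rook-ceph get lease rook-ceph-cephfs-csi-ceph-com -o jsonpath='{.spec.holderIdentity}{"\n"}'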
And here is the ceph status output from the toolbox pod:
      cluster:
        id:     0b71fd4c-9731-4fea-81a7-1b5194e14204
        health: HEALTH_ERR
                Module 'dashboard' has failed: [('x509 certificate routines', 'X509_check_private_key', 'key values mismatch')]
                Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded, 1 pg undersized
                1 pgs not deep-scrubbed in time
                1 pgs not scrubbed in time

      services:
        mon: 3 daemons, quorum at,av,aw (age 4d)
        mgr: a(active, since 4d)
        osd: 3 osds: 3 up (since 12d), 3 in (since 12d)

      data:
        pools:   1 pools, 1 pgs
        objects: 2 objects, 0 B
        usage:   3.3 GiB used, 3.2 TiB / 3.2 TiB avail
        pgs:     2/6 objects degraded (33.333%)
                 1 active+undersized+degraded
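For completeness, these are the toolbox commands I understand can narrow down the HEALTH_ERR shown above; the dashboard disable/enable step at the end is only something I have seen suggested for the certificate mismatch, I have not confirmed it is the right fix:

    # More detail on every health warning/error
    ceph health detail

    # Which PG is stuck undersized/degraded and which OSDs it is mapped to
    ceph pg dump_stuck

    # Regenerate the dashboard certificate and reload the module
    # (only a suggestion I have seen for the key-mismatch error, not verified)
    ceph dashboard create-self-signed-cert
    ceph mgr module disable dashboard
    ceph mgr module enable dashboard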
I think this is because the cluster's health is HEALTH_ERR, but I don't know how to fix it. I am currently using raw partitions to build the Ceph cluster: one partition on one node, and two partitions on another node.
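Since my OSDs only span two nodes (one partition on one node, two on the other), I suspect the undersized PG could come from the pool replica count versus the CRUSH failure domain, and this is how I plan to check it; replicapool is the pool name from the Rook example CephBlockPool and may not match mine:

    # How the 3 OSDs are distributed across hosts
    ceph osd tree

    # Replica count and CRUSH rule of the (only) pool
    ceph osd pool ls detail
    ceph osd pool get replicapool size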
I also noticed that a few pods had restarted many times, so I checked their logs. In the csi-rbdplugin-provisioner pods, the csi-resizer, csi-attacher and csi-snapshotter containers all show the same error:
    E0122 08:08:37.891106 1 leaderelection.go:321] error retrieving resource lock rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com": dial tcp 10.96.0.1:443: i/o timeout
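10.96.0.1:443 is the in-cluster kubernetes service, so I assume the sidecars intermittently cannot reach the API server from their node. These are the generic checks I know of for that (nothing Rook-specific is assumed here):

    # The ClusterIP the sidecars are timing out against
    kubectl get svc kubernetes -n default

    # The real API-server endpoints behind it
    kubectl get endpoints kubernetes -n default

    # kube-proxy / CNI pods on the nodes that run the provisioners
    kubectl -n kube-system get pods -o wide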
and a repeated error in csi-snapshotter:
    E0122 08:08:48.420082 1 reflector.go:127] github.com/kubernetes-csi/external-snapshotter/client/v3/informers/externalversions/factory.go:117: Failed to watch *v1beta1.VolumeSnapshotClass: failed to list *v1beta1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
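As far as I can tell, this one just means the v1beta1 snapshot CRDs are not installed in the cluster, which can be confirmed with something like:

    # Are the snapshot.storage.k8s.io CRDs present at all?
    kubectl get crd | grep -i volumesnapshot
    kubectl api-resources --api-group=snapshot.storage.k8s.io

    # If they are missing, they are normally installed from the
    # kubernetes-csi/external-snapshotter project before deploying the CSI driver.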
As for the mgr pod, there is one log entry that repeats over and over:
    debug 2021-01-29T00:47:22.155+0000 7f10fdb48700 0 log_channel(cluster) log [DBG] : pgmap v28775: 1 pgs: 1 active+undersized+degraded; 0 B data, 337 MiB used, 3.2 TiB / 3.2 TiB avail; 2/6 objects degraded (33.333%)
What is also strange is that the mon pods are named at, av and aw instead of a, b and c. It looks like the mon pods have been deleted and recreated many times, but I don't know why.
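My understanding is that Rook advances the mon letter every time it replaces a failed mon, so reaching at/av/aw would mean many replacements, and the operator log should say why earlier mons were removed. This is how I am grepping it; the deployment name comes from the pod list above, and the app=rook-ceph-mon label is, as far as I know, the default one Rook applies:

    # Why were earlier mons (a, b, c, ...) removed and replaced?
    kubectl -n rook-ceph logs deploy/rook-ceph-operator | grep -iE 'mon|failover'

    # Current mon deployments
    kubectl -n rook-ceph get deploy -l app=rook-ceph-mon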
Any advice would be appreciated.