Tags: mongodb, kubernetes, google-kubernetes-engine
In Kubernetes on Google Container Engine I have 3 nodes, each with 3.75 GB of memory.
Now I also have an API that is called from a single endpoint. That endpoint performs bulk inserts into MongoDB like this:
IMongoCollection<T> stageCollection = Database.GetCollection<T>(StageName);

foreach (var batch in entites.Batch(1000))
{
    await stageCollection.InsertManyAsync(batch);
}
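Note that Batch is not part of the base class library; it is presumably MoreLINQ's Batch or a similar extension that splits an IEnumerable<T> into fixed-size chunks. A minimal sketch of such a helper, under that assumption:

using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Hypothetical stand-in for the Batch extension used above:
    // yields the source in chunks of at most 'size' items, so each
    // InsertManyAsync call sends a bounded number of documents.
    public static IEnumerable<IEnumerable<T>> Batch<T>(this IEnumerable<T> source, int size)
    {
        var bucket = new List<T>(size);
        foreach (var item in source)
        {
            bucket.Add(item);
            if (bucket.Count == size)
            {
                yield return bucket;
                bucket = new List<T>(size);
            }
        }
        if (bucket.Count > 0)
            yield return bucket;
    }
}

Each yielded bucket is a fresh list, so a single InsertManyAsync call never holds more than 1000 documents at once on the client side.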
This happens frequently, and then we run into out-of-memory situations.
On the one hand we limited wiredTigerCacheSizeGB to 1.5; on the other we defined resource limits for the pod.
But the result is still the same. To me it looks as if MongoDB is unaware of the pod's memory limit. Is this a known issue, and how can it be handled without scaling up to "monster" machines?
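One way to check where the memory actually goes is to query serverStatus from the same C# driver and compare mongod's resident set against the WiredTiger cache. This is a diagnostic sketch, not part of the original setup; Database is assumed to be the IMongoDatabase instance from the snippet above, and the field names follow the serverStatus document:

using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Diagnostic sketch (assumed setup): compare mongod's overall resident
// memory with what the WiredTiger cache is actually holding.
var status = Database.RunCommand<BsonDocument>(new BsonDocument("serverStatus", 1));

// Resident set size of the mongod process, reported in MB.
var residentMb = status["mem"]["resident"].ToInt64();

// Bytes currently held in the WiredTiger cache (capped at 1.5 GB here).
var cacheBytes = status["wiredTiger"]["cache"]["bytes currently in the cache"].ToInt64();

Console.WriteLine($"resident: {residentMb} MB, WiredTiger cache: {cacheBytes / (1024 * 1024)} MB");

If the resident set sits well above the cache cap during the bulk inserts, memory outside the cache (connections, in-flight insert batches, index builds) is what pushes the container over its 2Gi limit.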
The configuration YAML looks like this:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.6
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - "0.0.0.0"
        - "--noprealloc"
        - "--wiredTigerCacheSizeGB"
        - "1.5"
        resources:
          limits:
            memory: "2Gi"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi
UPDATE
In the meantime I have also configured pod anti-affinity to make sure there is no RAM contention on the node running MongoDB, but we still end up in OOM scenarios.
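For reference, the anti-affinity rule in question would look roughly like this; a sketch with assumed labels, placed in the pod spec of the other workloads so the scheduler keeps them off the node that hosts the mongo pod:

# Sketch of the kind of anti-affinity described above (labels assumed):
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: role
          operator: In
          values:
          - mongo
      topologyKey: kubernetes.io/hostname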