I have a mongo cluster with 6 replica sets; 5 of them are fine, one is not. Each replica set has three members. Here is the rs.status() output for the broken one:
{
    "set" : "rs_5",
    "date" : ISODate("2015-12-16T02:37:39Z"),
    "myState" : 5,
    "members" : [
        {
            "_id" : 0,
            "name" : "mongo_rs_5_member_1:27018",
            "health" : 1,
            "state" : 5,
            "stateStr" : "STARTUP2",
            "uptime" : 33600,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-12-16T02:37:38Z"),
            "lastHeartbeatRecv" : ISODate("2015-12-16T02:37:37Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "initial sync need a member to be primary or secondary to do our initial sync"
        },
        {
            "_id" : 1,
            "name" : "mongo_rs_5_member_2:27019", …
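The heartbeat message is the key symptom: a member in STARTUP2 needs some other member in state 1 (PRIMARY) or 2 (SECONDARY) to initial-sync from, and here every member is stuck in state 5, so none of them can make progress. A minimal sketch of that check against an rs.status()-style document (the `brokenSet` values mirror the output above; `hasSyncSource` is a hypothetical helper, not a MongoDB API):

```javascript
// Given an rs.status()-style document, check whether any healthy member
// is in a state that can serve as an initial sync source.
// MongoDB member states: 1 = PRIMARY, 2 = SECONDARY, 5 = STARTUP2.
function hasSyncSource(status) {
  return status.members.some(
    (m) => m.health === 1 && (m.state === 1 || m.state === 2)
  );
}

// The broken set above: every member is stuck in STARTUP2 (state 5),
// so no initial sync source exists and all members wait indefinitely.
const brokenSet = {
  set: "rs_5",
  members: [
    { _id: 0, health: 1, state: 5 },
    { _id: 1, health: 1, state: 5 },
    { _id: 2, health: 1, state: 5 },
  ],
};
```

When a whole set is wedged like this, the usual way out is to connect to a member that still has its data files and force a reconfiguration (`cfg = rs.conf(); rs.reconfig(cfg, { force: true })`) so an election can produce a primary for the others to sync from.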
I set up 3 shards but was running out of capacity, so I added 3 more (each shard is a replica set). However, the data is not evenly distributed across the cluster. My chunkSize is set to the standard 64mb:
mongos> db.settings.find( { _id:"chunksize" } )
{ "_id" : "chunksize", "value" : 64 }
I assumed this meant that when a chunk reaches 64mb, it splits into two equal 32mb chunks, but that is not what is shown here. Is that assumption incorrect?
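It helps to know that splits are not enforced continuously: mongos only considers splitting a chunk when writes routed through it have grown the chunk past chunkSize, and the split point is a shard-key value near the middle of the chunk's documents, so the halves are only roughly equal. Chunks that were filled before sharding was enabled, or that receive no further writes, can sit above 64mb indefinitely. A sketch of commands for inspecting and manually splitting, to be run in the mongo shell against mongos (the namespace `mydb.accounts` and shard-key field `accountId` are assumptions for illustration):

```javascript
// Run in the mongo shell connected to mongos.
var conf = db.getSiblingDB("config");
conf.chunks.count({ ns: "mydb.accounts" }); // total chunk count for the collection
sh.getBalancerState();    // is the balancer enabled?
sh.isBalancerRunning();   // is a migration in progress right now?
// Force a split of the chunk containing a given shard-key value:
sh.splitFind("mydb.accounts", { accountId: NumberLong(12345) });
```

`sh.splitFind` picks a median split point inside the matching chunk; running it over oversized chunks (and leaving the balancer enabled) is one way to get the distribution back toward the configured chunkSize.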
Here is my shard distribution:
mongos> db.accounts.getShardDistribution()
Shard rs_0 at rs_0/mongo_rs_0_member_1:27018,mongo_rs_0_member_2:27019,mongo_rs_0_member_3:27020
data : 137.62GiB docs : 41991598 chunks : 1882
estimated data per chunk : 74.88MiB
estimated docs per chunk : 22312
Shard rs_1 at rs_1/mongo_rs_1_member_1:27018,mongo_rs_1_member_2:27019,mongo_rs_1_member_3:27020
data : 135.2GiB docs : 41159069 chunks : 1882
estimated data per chunk : 73.56MiB
estimated docs per chunk …
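Note that the distribution output already contradicts the split-in-half assumption: the per-chunk estimate is just data divided by chunk count, and it comes out above the configured 64mb maximum, which could not happen if every chunk were halved on reaching 64mb. A quick check of that arithmetic, using the numbers from the output above (`avgChunkMiB` is a hypothetical helper, not a MongoDB API):

```javascript
// Average chunk size implied by getShardDistribution(): data / chunks.
// 1 GiB = 1024 MiB.
function avgChunkMiB(dataGiB, chunkCount) {
  return (dataGiB * 1024) / chunkCount;
}

// rs_0 above: 137.62 GiB across 1882 chunks.
const rs0Avg = avgChunkMiB(137.62, 1882); // ≈ 74.88 MiB, above the 64mb chunkSize
// rs_1 above: 135.2 GiB across 1882 chunks.
const rs1Avg = avgChunkMiB(135.2, 1882); // ≈ 73.56 MiB
```

Both averages exceed 64mb, consistent with chunks that grew without being revisited by the autosplit logic rather than with a strict split-at-64mb rule.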