Shard unassigned after flush failure in Elasticsearch

bor*_*ris 5 java memory elasticsearch

Sometimes, after inserting new documents into ElasticSearch v1.1.0, a NoShardAvailableActionException is thrown. When I check with

curl  'localhost:9200/_cat/shards/cvk'

I get: cvk 0 p UNASSIGNED

After restarting Elasticsearch with:

/etc/init.d/elasticsearch restart

everything works fine again.

ES runs on an Ubuntu 12 VPS; the index has only 1 shard and replication is disabled.
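
For reference, the index was created with settings roughly like this (a sketch; the mapping itself is omitted):

curl -XPUT 'localhost:9200/cvk' -d '{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'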

I found "No space left on device" errors in the log, but there is enough disk space on my machine. I'm uploading documents in bulk, 1000 at a time (each document is around 512 bytes).
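
Each bulk request looks roughly like the sketch below, one action line plus one source line per document (the sample document is taken from the log further down):

curl -XPOST 'localhost:9200/_bulk' -d '
{"index":{"_index":"cvk","_type":"public","_id":"22017747"}}
{"public":"22017747","name":"Private community | VK","wall":1}
'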

How can I fix the flush problem? And if that is not possible, how can I reassign the shard through the REST interface (without restarting the server)?
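
The only REST-level fix I can think of is the cluster reroute API, something like the sketch below (the node id is copied from the log; I am not sure allow_primary is safe here):

# allow_primary forces allocation of a primary; operations still in the translog may be lost
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [{
    "allocate": {
      "index": "cvk",
      "shard": 0,
      "node": "Sk1Eoi84TDW9anq_zQsNJg",
      "allow_primary": true
    }
  }]
}'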

df output from my VPS:

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         20G   13G  6.6G  65% /
udev            237M   12K  237M   1% /dev
tmpfs            50M  216K   49M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            246M     0  246M   0% /run/shm

Log from the VPS showing the errors:

[2014-05-03 04:20:20,088][WARN ][index.translog           ] [Molecule Man] [cvk][0] failed to flush shard on translog threshold
org.elasticsearch.index.engine.FlushFailedEngineException: [cvk][0] Flush failed
        at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:829)
        at org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:589)
        at org.elasticsearch.index.translog.TranslogService$TranslogBasedFlush$1.run(TranslogService.java:194)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: No space left on device
        at java.io.RandomAccessFile.writeBytes0(Native Method)
        at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:550)
        at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:452)
        at org.apache.lucene.store.BufferedChecksumIndexOutput.flushBuffer(BufferedChecksumIndexOutput.java:71)
        at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:113)
        at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:102)
        at org.apache.lucene.store.BufferedChecksumIndexOutput.flush(BufferedChecksumIndexOutput.java:86)
        at org.apache.lucene.store.BufferedIndexOutput.writeBytes(BufferedIndexOutput.java:92)
        at org.elasticsearch.index.store.Store$StoreIndexOutput.writeBytes(Store.java:634)
        at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)
        at org.apache.lucene.store.RAMOutputStream.writeTo(RAMOutputStream.java:65)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlock(BlockTreeTermsWriter.java:970)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlocks(BlockTreeTermsWriter.java:579)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter$FindBlocks.freeze(BlockTreeTermsWriter.java:555)
        at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:214)
        at org.apache.lucene.util.fst.Builder.add(Builder.java:394)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finishTerm(BlockTreeTermsWriter.java:1047)
        at org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:548)
        at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
        at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
        at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
        at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
        at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:465)
        at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:506)
        at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:616)
        at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2864)
        at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3022)
        at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2989)
        at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:812)
        ... 5 more
[2014-05-03 04:20:20,321][WARN ][index.merge.scheduler    ] [Molecule Man] [cvk][0] failed to merge
java.io.IOException: No space left on device
        at java.io.RandomAccessFile.writeBytes0(Native Method)
        at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:550)
        at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:452)
        at org.apache.lucene.store.RateLimitedFSDirectory$RateLimitedIndexOutput.flushBuffer(RateLimitedFSDirectory.java:102)
        at org.apache.lucene.store.BufferedChecksumIndexOutput.flushBuffer(BufferedChecksumIndexOutput.java:71)
        at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:113)
        at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:102)
        at org.apache.lucene.store.BufferedChecksumIndexOutput.flush(BufferedChecksumIndexOutput.java:86)
        at org.apache.lucene.store.BufferedIndexOutput.writeBytes(BufferedIndexOutput.java:92)
        at org.elasticsearch.index.store.Store$StoreIndexOutput.writeBytes(Store.java:634)
        at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)
        at org.apache.lucene.store.RAMOutputStream.writeTo(RAMOutputStream.java:65)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlock(BlockTreeTermsWriter.java:980)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlocks(BlockTreeTermsWriter.java:767)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter$FindBlocks.freeze(BlockTreeTermsWriter.java:555)
        at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:214)
        at org.apache.lucene.util.fst.Builder.add(Builder.java:394)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finishTerm(BlockTreeTermsWriter.java:1047)
        at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$WrappedTermsConsumer.finishTerm(BloomFilterPostingsFormat.java:439)
        at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:112)
        at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
        at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:383)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:106)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4119)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3716)
        at org.apache.lucene.index.TrackingSerialMergeScheduler.merge(TrackingSerialMergeScheduler.java:122)
        at org.elasticsearch.index.merge.scheduler.SerialMergeSchedulerProvider$CustomSerialMergeScheduler.merge(SerialMergeSchedulerProvider.java:89)
        at org.elasticsearch.index.merge.EnableMergeScheduler.merge(EnableMergeScheduler.java:71)
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1936)
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1930)
        at org.elasticsearch.index.merge.Merges.maybeMerge(Merges.java:47)
        at org.elasticsearch.index.engine.internal.InternalEngine.maybeMerge(InternalEngine.java:926)
        at org.elasticsearch.index.shard.service.InternalIndexShard$EngineMerger$1.run(InternalIndexShard.java:966)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
[2014-05-03 04:20:20,382][WARN ][index.engine.internal    ] [Molecule Man] [cvk][0] failed engine
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: No space left on device
        at org.elasticsearch.index.merge.scheduler.SerialMergeSchedulerProvider$CustomSerialMergeScheduler.merge(SerialMergeSchedulerProvider.java:92)
        at org.elasticsearch.index.merge.EnableMergeScheduler.merge(EnableMergeScheduler.java:71)
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1936)
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1930)
        at org.elasticsearch.index.merge.Merges.maybeMerge(Merges.java:47)
        at org.elasticsearch.index.engine.internal.InternalEngine.maybeMerge(InternalEngine.java:926)
        at org.elasticsearch.index.shard.service.InternalIndexShard$EngineMerger$1.run(InternalIndexShard.java:966)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: No space left on device
        at java.io.RandomAccessFile.writeBytes0(Native Method)
        at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:550)
        at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:452)
        at org.apache.lucene.store.RateLimitedFSDirectory$RateLimitedIndexOutput.flushBuffer(RateLimitedFSDirectory.java:102)
        at org.apache.lucene.store.BufferedChecksumIndexOutput.flushBuffer(BufferedChecksumIndexOutput.java:71)
        at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:113)
        at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:102)
        at org.apache.lucene.store.BufferedChecksumIndexOutput.flush(BufferedChecksumIndexOutput.java:86)
        at org.apache.lucene.store.BufferedIndexOutput.writeBytes(BufferedIndexOutput.java:92)
        at org.elasticsearch.index.store.Store$StoreIndexOutput.writeBytes(Store.java:634)
        at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)
        at org.apache.lucene.store.RAMOutputStream.writeTo(RAMOutputStream.java:65)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlock(BlockTreeTermsWriter.java:980)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlocks(BlockTreeTermsWriter.java:767)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter$FindBlocks.freeze(BlockTreeTermsWriter.java:555)
        at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:214)
        at org.apache.lucene.util.fst.Builder.add(Builder.java:394)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finishTerm(BlockTreeTermsWriter.java:1047)
        at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$WrappedTermsConsumer.finishTerm(BloomFilterPostingsFormat.java:439)
        at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:112)
        at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
        at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:383)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:106)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4119)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3716)
        at org.apache.lucene.index.TrackingSerialMergeScheduler.merge(TrackingSerialMergeScheduler.java:122)
        at org.elasticsearch.index.merge.scheduler.SerialMergeSchedulerProvider$CustomSerialMergeScheduler.merge(SerialMergeSchedulerProvider.java:89)
        ... 9 more
[2014-05-03 04:20:20,490][DEBUG][action.bulk              ] [Molecule Man] [cvk][0] failed to execute bulk item (index) index {[cvk][public][22017747], source[{"public":"22017747","name":"Private community | VK","desc":"\"\"","vol":0,"priv":null,"org":null,"phone":null,"email":null,"url":"5ghj6","wall":1,"post":null,"like":null,"share":null}]}
org.elasticsearch.index.engine.IndexFailedEngineException: [cvk][0] Index failed for [public#22017747]
        at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:483)
        at org.elasticsearch.index.shard.service.InternalIndexShard.index(InternalIndexShard.java:396)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:401)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:157)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
        at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:645)
        at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:659)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1525)
        at org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:532)
        at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:470)
        ... 8 more
[2014-05-03 04:20:20,493][DEBUG][action.bulk              ] [Molecule Man] [cvk][0], node[Sk1Eoi84TDW9anq_zQsNJg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.bulk.BulkShardRequest@61204bff]
java.lang.NullPointerException
        at org.elasticsearch.action.bulk.TransportShardBulkAction.applyVersion(TransportShardBulkAction.java:617)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:178)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
[2014-05-03 04:20:20,534][WARN ][cluster.action.shard     ] [Molecule Man] [cvk][0] sending failed shard for [cvk][0], node[Sk1Eoi84TDW9anq_zQsNJg], [P], s[STARTED], indexUUID [m0nqEEqXQu-rHc5ipn4ZPA], reason [engine failure, message [MergeException[java.io.IOException: No space left on device]; nested: IOException[No space left on device]; ]]
[2014-05-03 04:20:20,534][WARN ][cluster.action.shard     ] [Molecule Man] [cvk][0] received shard failed for [cvk][0], node[Sk1Eoi84TDW9anq_zQsNJg], [P], s[STARTED], indexUUID [m0nqEEqXQu-rHc5ipn4ZPA], reason [engine failure, message [MergeException[java.io.IOException: No space left on device]; nested: IOException[No space left on device]; ]]

Node stats:

indices: {
  docs: {
    count: 4439439
    deleted: 0
  }
  store: {
    size_in_bytes: 643890465
    throttle_time_in_millis: 0
  }
  indexing: {
    index_total: 2214686
    index_time_in_millis: 1679906
    index_current: 1
    delete_total: 0
    delete_time_in_millis: 0
    delete_current: 0
  }
  get: {
    total: 0
    time_in_millis: 0
    exists_total: 0
    exists_time_in_millis: 0
    missing_total: 0
    missing_time_in_millis: 0
    current: 0
  }
  search: {
    open_contexts: 0
    query_total: 0
    query_time_in_millis: 0
    query_current: 0
    fetch_total: 0
    fetch_time_in_millis: 0
    fetch_current: 0
  }
  merges: {
    current: 0
    current_docs: 0
    current_size_in_bytes: 0
    total: 23
    total_time_in_millis: 1081333
    total_docs: 15716810
    total_size_in_bytes: 5938832547
  }
  refresh: {
    total: 8
    total_time_in_millis: 0
  }
  flush: {
    total: 202
    total_time_in_millis: 677609
  }
  warmer: {
    current: 0
    total: 2
    total_time_in_millis: 15
  }
  filter_cache: {
    memory_size_in_bytes: 0
    evictions: 0
  }
  id_cache: {
    memory_size_in_bytes: 0
  }
  fielddata: {
    memory_size_in_bytes: 0
    evictions: 0
  }
  percolate: {
    total: 0
    time_in_millis: 0
    current: 0
    memory_size_in_bytes: -1
    memory_size: -1b
    queries: 0
  }
  completion: {
    size_in_bytes: 0
  }
  segments: {
    count: 18
    memory_in_bytes: 38866707
  }
  translog: {
    operations: 0
    size_in_bytes: 0
  }
}
os: {
  timestamp: 1399114654034
  uptime_in_millis: 701756
  load_average: [
    0
    0.01
    0.05
  ]
  cpu: {
    sys: 0
    user: 0
    idle: 99
    usage: 0
    stolen: 0
  }
  mem: {
    free_in_bytes: 34357248
    used_in_bytes: 480374784
    free_percent: 33
    used_percent: 66
    actual_free_in_bytes: 172974080
    actual_used_in_bytes: 341757952
  }
  swap: {
    used_in_bytes: 0
    free_in_bytes: 0
  }
}
process: {
  timestamp: 1399114654035
  open_file_descriptors: 103
  cpu: {
    percent: 0
    sys_in_millis: 118480
    user_in_millis: 2057680
    total_in_millis: 2176160
  }
  mem: {
    resident_in_bytes: 263897088
    share_in_bytes: 6635520
    total_virtual_in_bytes: 1609924608
  }
}
jvm: {
  timestamp: 1399114654035
  uptime_in_millis: 43582377
  mem: {
    heap_used_in_bytes: 80238424
    heap_used_percent: 52
    heap_committed_in_bytes: 152043520
    heap_max_in_bytes: 152043520
    non_heap_used_in_bytes: 42873536
    non_heap_committed_in_bytes: 66764800
    pools: {
      young: {
        used_in_bytes: 15877936
        max_in_bytes: 41943040
        peak_used_in_bytes: 41943040
        peak_max_in_bytes: 41943040
      }
      survivor: {
        used_in_bytes: 1463048
        max_in_bytes: 5242880
        peak_used_in_bytes: 5242880
        peak_max_in_bytes: 5242880
      }
      old: {
        used_in_bytes: 62897440
        max_in_bytes: 104857600
        peak_used_in_bytes: 104857600
        peak_max_in_bytes: 104857600
      }
    }
  }
  threads: {
    count: 36
    peak_count: 40
  }
  gc: {
    collectors: {
      young: {
        collection_count: 7359
        collection_time_in_millis: 116960
      }
      old: {
        collection_count: 2693
        collection_time_in_millis: 131864
      }
    }
  }
  buffer_pools: {
    direct: {
      count: 16
      used_in_bytes: 2694367
      total_capacity_in_bytes: 2694367
    }
    mapped: {
      count: 83
      used_in_bytes: 635281868
      total_capacity_in_bytes: 635281868
    }
  }
}
thread_pool: {
  generic: {
    threads: 2
    queue: 0
    active: 0
    rejected: 0
    largest: 6
    completed: 9045
  }
  index: {
    threads: 0
    queue: 0
    active: 0
    rejected: 0
    largest: 0
    completed: 0
  }
  get: {
    threads: 0
    queue: 0
    active: 0
    rejected: 0
    largest: 0
    completed: 0
  }
  snapshot: {
    threads: 1
    queue: 0
    active: 0
    rejected: 0
    largest: 1
    completed: 442
  }
  merge: {
    threads: 1
    queue: 0
    active: 0
    rejected: 0
    largest: 1
    completed: 1297
  }
  suggest: {
    threads: 0
    queue: 0
    active: 0
    rejected: 0
    largest: 0
    completed: 0
  }
  bulk: {
    threads: 1
    queue: 0
    active: 0
    rejected: 0
    largest: 1
    completed: 2213
  }
  optimize: {
    threads: 0
    queue: 0
    act

Joh*_*one 2

First of all, it looks like you're running Elasticsearch, and creating your index, on the root partition:

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         20G   13G  6.6G  65% /
udev            237M   12K  237M   1% /dev
tmpfs            50M  216K   49M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            246M     0  246M   0% /run/shm

That's generally not the best idea if you have the option of mounting another drive for your data.
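
If you do add a disk, you can point Elasticsearch at it in elasticsearch.yml (the mount point below is hypothetical):

# /etc/elasticsearch/elasticsearch.yml
# /mnt/esdata is a hypothetical mount point for the extra drive
path.data: /mnt/esdata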

You're failing during a Lucene index segment merge, which generally requires a large amount of free disk space. On a very small partition of only 20G that is already at 65% usage, you can easily run out of space, particularly since you're also competing with the disk needs of every other process on the box. There are more details on managing and configuring the Elasticsearch merge policy here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-merge.html

You may not be able to reliably index and manage 9GB of data on a 20GB partition (which is also the root partition), particularly if you're churning the data heavily. You can try to configure the index to avoid or reduce segment merges, which helps save disk space, but even that may not work.
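
For example, the tiered merge policy can be tuned at runtime to tolerate more segments and to cap merged-segment size, which lowers the transient disk space each merge needs (the values below are illustrative, not recommendations):

curl -XPUT 'localhost:9200/cvk/_settings' -d '{
  "index.merge.policy.segments_per_tier": 20,
  "index.merge.policy.max_merge_at_once": 5,
  "index.merge.policy.max_merged_segment": "1gb"
}'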

As for why it's taking up so much space, that depends on how you've mapped your data, but in general Elasticsearch by default stores a copy of all of your data in raw form, and additionally indexes every individual field.

If you really, really need to fit on a 20GB system, I'd take a close look at your mapping and see which fields you can avoid indexing or storing -

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-source-field.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-all-field.html
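
As a sketch (field names taken from the document source in your log; the index name is hypothetical, and dropping _source only makes sense if you never need to retrieve the original documents), a slimmer mapping might look like:

curl -XPUT 'localhost:9200/cvk_slim' -d '{
  "mappings": {
    "public": {
      "_source": { "enabled": false },
      "_all":    { "enabled": false },
      "properties": {
        "name": { "type": "string" },
        "desc": { "type": "string", "index": "no" },
        "url":  { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'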