Rag*_*har · 15 · node.js, elasticsearch
After running smoothly in production for more than 10 months, I suddenly started getting this error on simple search queries.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
        "bytes_wanted" : 745522124,
        "bytes_limit" : 745517875
      }
    ],
    "type" : "circuit_breaking_exception",
    "reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
    "bytes_wanted" : 745522124,
    "bytes_limit" : 745517875
  },
  "status" : 503
}
Initially I hit this error while running a simple terms query. To debug the circuit_breaking_exception, I tried a _cat/health query against the Elasticsearch cluster, but it failed with the same error; even the simplest request to localhost:9200 returned it. I have no idea what suddenly happened to the cluster. Here is my circuit breaker status:
"breakers" : {
"request" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 0,
"estimated_size" : "0b",
"overhead" : 1.0,
"tripped" : 0
},
"fielddata" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 406826332,
"estimated_size" : "387.9mb",
"overhead" : 1.03,
"tripped" : 0
},
"in_flight_requests" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 560,
"estimated_size" : "560b",
"overhead" : 1.0,
"tripped" : 0
},
"accounting" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 146387859,
"estimated_size" : "139.6mb",
"overhead" : 1.0,
"tripped" : 0
},
"parent" : {
"limit_size_in_bytes" : 745517875,
"limit_size" : "710.9mb",
"estimated_size_in_bytes" : 553214751,
"estimated_size" : "527.5mb",
"overhead" : 1.0,
"tripped" : 0
}
}
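The numbers in the stats make the failure mechanical rather than mysterious: the parent breaker rejects any request whose bytes, added to the current tracked usage, would exceed the limit. A minimal sketch of that check (simplified — the real breaker also applies per-breaker overhead multipliers), plugging in the values reported above:

```python
# Simplified sketch of the parent circuit-breaker check, using the
# numbers from the breaker stats above. The real Elasticsearch logic
# also applies overhead multipliers per child breaker.

parent_limit = 745517875      # parent "limit_size_in_bytes" (710.9mb)
parent_estimated = 553214751  # parent "estimated_size_in_bytes" (527.5mb)

def would_trip(request_bytes: int) -> bool:
    """True if adding request_bytes would push usage past the parent limit."""
    return parent_estimated + request_bytes > parent_limit

# Only ~183 MiB of headroom remains under the 710.9mb limit, so even a
# moderately sized request (or internal allocation) trips the breaker.
headroom = parent_limit - parent_estimated
print(f"headroom: {headroom / 1024**2:.1f} MiB")  # roughly 183.4 MiB
print(would_trip(200 * 1024**2))                  # a 200 MiB request: True
```

With fielddata alone sitting at 387.9mb, most of the parent budget is consumed before any request arrives, which is why even `_cat/health` fails.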
I found a similar problem in a GitHub issue that suggests either increasing the circuit breaker memory limit or disabling it, but I'm not sure which to choose. Please help!
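For reference, these are the breaker limits that can be tuned (values here are purely illustrative, not a recommendation). They are percentages of the JVM heap, so raising them only buys headroom — if the heap itself is too small, resizing the heap is the real fix:

```yaml
# elasticsearch.yml — illustrative values only.
# Defaults in 6.x: total 70%, fielddata 60%, request 60% of JVM heap.
indices.breaker.total.limit: 80%
indices.breaker.fielddata.limit: 50%
indices.breaker.request.limit: 50%
```

These can also be changed dynamically via the `_cluster/settings` API. Disabling breakers entirely trades a clean 503 for a potential OutOfMemoryError, so it is rarely the right choice.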
Elasticsearch version 6.3
Rag*_*har · 27
Finally, after some research, I found a solution. For details on a good JVM memory configuration for Elasticsearch in production, see Heap: Sizing and Swapping.
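Assuming the fix was resizing the heap (which is what the linked guide covers), the heap is set in `config/jvm.options`; the usual guidance is to set the minimum and maximum to the same value and stay at or below roughly half the machine's RAM. The `4g` below is an illustrative size, not a prescription:

```
# config/jvm.options — illustrative sizing.
# Set Xms and Xmx equal to avoid resize pauses; keep the heap
# at or below ~50% of available RAM so the OS page cache has room.
-Xms4g
-Xmx4g
```

Since the breaker limits are percentages of the heap, a larger heap raises every breaker's absolute budget at once.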