Kibana server is not ready yet

Moo*_*rse 22 rhel elasticsearch kibana

I have just installed Kibana 7.3 on RHEL 8. The Kibana service is active (running), but when I curl http://localhost:5601
I get the message `Kibana server is not ready yet`. My Elasticsearch instance is on another server and responds to my requests successfully. I have updated kibana.yml accordingly:

elasticsearch.hosts: ["http://EXTERNAL-IP-ADDRESS-OF-ES:9200"]

Elasticsearch is reachable from the internet and replies:

{
  "name" : "ip-172-31-21-240.ec2.internal",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "y4UjlddiQimGRh29TVZoeA",
  "version" : {
    "number" : "7.3.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "4749ba6",
    "build_date" : "2019-08-19T20:19:25.651794Z",
    "build_snapshot" : false,
    "lucene_version" : "8.1.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Result of `sudo systemctl status kibana`:

● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-09-19 12:22:34 UTC; 24min ago
 Main PID: 4912 (node)
    Tasks: 21 (limit: 4998)
   Memory: 368.8M
   CGroup: /system.slice/kibana.service
           └─4912 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size>

Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:44 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0

Result of `sudo journalctl --unit kibana`:

Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","task_manager"],"pid":1356,"message":"PollError No Living connec>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>

Do you know what the problem could be?

kar*_*ivi 16

I ran into the same problem when I upgraded Elasticsearch from v6 to v7.

Deleting the `.kibana*` indices solved it:

curl --request DELETE 'https://elastic-search-host:9200/.kibana*'

  • Setting up Elastic locally is like going through the seven circles of hell (5 upvotes)
  • What exactly does deleting `.kibana*` do? What is lost? (3 upvotes)
  • Where are these `.kibana*` indices located? (2 upvotes)

thi*_*a92 8

This may not be the solution to this exact question.

In my case, the Kibana and Elasticsearch versions were incompatible. Since I run both in Docker, I simply recreated the two containers with the same version (7.5.1).

https://www.elastic.co/support/matrix#matrix_compatibility

  • In fact, that was the solution in my case today. The error message from `sudo journalctl --unit kibana | tail -1` was `...This version of Kibana (v7.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v6.8.1 @ <IP>:9200 (<IP>)` (3 upvotes)

小智 6

The error may be related to the `elasticsearch.hosts` setting. The following steps worked for me:

  1. Open the /etc/elasticsearch/elasticsearch.yml file and check this setting:

#network.host: localhost

  2. Open the /etc/kibana/kibana.yml file and check this setting:

#elasticsearch.hosts: ["http://localhost:9200"]

  3. Make sure the two lines point to the same host. If you use an IP address for the Elasticsearch network host, you need to use the same IP address for Kibana.

The problem was that Kibana could not reach Elasticsearch locally.
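As a sketch of the matching described above, assuming Elasticsearch is bound to 172.31.21.240 (a placeholder taken from the node name in the question; substitute your own address), the pair of settings would look like this:

```yaml
# /etc/elasticsearch/elasticsearch.yml
network.host: 172.31.21.240        # address Elasticsearch binds to

# /etc/kibana/kibana.yml
elasticsearch.hosts: ["http://172.31.21.240:9200"]   # must point at that same address
```

After changing either file, restart the corresponding service (`sudo systemctl restart elasticsearch` or `sudo systemctl restart kibana`) for the setting to take effect.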


Tou*_*dia 5

The problem is that Kibana cannot reach Elasticsearch. I think you have enabled the xpack.security plugin in elasticsearch.yml by adding the line:

xpack.security.enabled: true

If so, you need to uncomment the two lines `#elasticsearch.username` and `#elasticsearch.password` in kibana.yml and set:

elasticsearch.username: "kibana"
elasticsearch.password: "your-password"

After that, save the changes and restart the Kibana service: `sudo systemctl restart kibana.service`


dıl*_*ücü 5

Run this:

curl -XDELETE http://localhost:9200/*kibana*

and restart the Kibana service:

service kibana restart