Logstash keeps shutting down inside its Docker container

Dip*_*vda 5 docker docker-compose logstash-configuration elasticsearch-7

I am using docker-compose to run ELKB. My main goal is to start the elasticsearch and logstash containers; the logstash container should connect to elasticsearch and forward logs to it for further searching and processing.

For some unknown reason, however, the logstash container keeps stopping. I need both the logstash and elasticsearch containers to stay up, but that is not happening.

I cannot figure out what causes the logstash container to shut down repeatedly.

I am using elasticsearch:7.6.2 and logstash:7.6.2.

Please review the code below and point out where I went wrong.

docker-compose.yml

# Docker version 19.03.5
# docker-compose version 1.25.3
version: "3.7"
services:
  elasticsearch:
    container_name: elasticsearch
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data:rw
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elkb
  logstash:
    container_name: logstash
    build:
      context: ./logstash
      dockerfile: Dockerfile
    ports:
      - 9600:9600
      - 5000:5000/udp
      - 5000:5000/tcp
    volumes:
      - ./logstash/input-logs:/usr/share/logstash/logs
      - ./logstash/data:/var/lib/logstash:rw
      - ./logstash/logs:/var/logs/logstash:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

networks:
  elkb:
    driver: bridge

volumes:
  elasticsearch:
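
One thing to note about the compose file above: `depends_on` only waits for the elasticsearch *container* to start, not for Elasticsearch to be ready to accept connections. A sketch of a healthcheck (an addition of mine, not part of the original file; the official image ships with curl) that at least makes readiness visible in `docker ps`:

```yaml
  elasticsearch:
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
```

Compose file format 3.7 does not support `depends_on: condition:`, so clients may still need their own retry behavior; here `restart: always` covers that.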

Elasticsearch Dockerfile

FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.2
COPY ./elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
RUN mkdir -p /var/log/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/log/elasticsearch
RUN mkdir -p /var/lib/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
EXPOSE 9200
EXPOSE 9300

elasticsearch.yml

cluster.name: es_cluster
node.name: es_node_1
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["0.0.0.0"]
cluster.initial_master_nodes: ["es_node_1"]
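
For a single-node setup like this one, a simpler alternative (my suggestion, not part of the original config) is to skip discovery entirely:

```yaml
# Replaces discovery.seed_hosts and cluster.initial_master_nodes
# for a one-node development cluster.
discovery.type: single-node
```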

Logstash Dockerfile

FROM docker.elastic.co/logstash/logstash:7.6.2
COPY logstash.yml /usr/share/logstash/config/logstash.yml
COPY ./pipeline/logstash.conf /usr/share/logstash/pipeline/logstash.conf
EXPOSE 9600

logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: "http://elasticsearch:9200"
xpack.monitoring.enabled: true

logstash.conf

input{
  stdin{}
}
output{
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
}
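
A likely cause, judging by the clean `Logstash shut down.` INFO lines in the container log: the `stdin{}` input reads from the container's standard input, and a container started with `docker-compose up -d` has its stdin closed immediately. The input hits EOF, the `main` pipeline finishes, and Logstash exits normally; `restart: always` then restarts it, so the cycle repeats. A sketch of a compose-level workaround (an assumption on my part, not from the original post) is to keep stdin open:

```yaml
# Sketch: keep stdin attached so the stdin{} input does not hit EOF at startup.
  logstash:
    stdin_open: true   # equivalent to `docker run -i`
    tty: true          # equivalent to `docker run -t`
```

A more robust fix is to replace `stdin{}` with a network-based input such as `beats`, which the reconfigured setup in the answer effectively does.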

Logstash container logs

container_logstash    | WARNING: An illegal reflective access operation has occurred
container_logstash    | WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to method sun.nio.ch.NativeThread.signal(long)
container_logstash    | WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
container_logstash    | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
container_logstash    | WARNING: All illegal access operations will be denied in a future release
container_logstash    | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
container_logstash    | [2020-04-25T14:50:33,271][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.2"}
container_logstash    | [2020-04-25T14:50:34,013][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
container_logstash    | [2020-04-25T14:50:34,127][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
container_logstash    | [2020-04-25T14:50:34,157][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
container_logstash    | [2020-04-25T14:50:34,160][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
container_logstash    | [2020-04-25T14:50:34,243][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
container_logstash    | [2020-04-25T14:50:34,244][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
container_logstash    | [2020-04-25T14:50:34,982][INFO ][org.reflections.Reflections] Reflections took 22 ms to scan 1 urls, producing 20 keys and 40 values 
container_logstash    | [2020-04-25T14:50:35,126][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
container_logstash    | [2020-04-25T14:50:35,134][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
container_logstash    | [2020-04-25T14:50:35,138][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
container_logstash    | [2020-04-25T14:50:35,138][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
container_logstash    | [2020-04-25T14:50:35,159][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
container_logstash    | [2020-04-25T14:50:35,182][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
container_logstash    | [2020-04-25T14:50:35,206][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
container_logstash    | [2020-04-25T14:50:35,213][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>750, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x27747a5a run>"}
container_logstash    | [2020-04-25T14:50:35,213][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
container_logstash    | [2020-04-25T14:50:35,711][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
container_logstash    | [2020-04-25T14:50:35,738][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
container_logstash    | [2020-04-25T14:50:36,233][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"ebdd88635541942b096027ed79be84efc3dd562a5f0e1b78fca83c7b5c9a1a7c", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_031a6e38-cafd-42f9-b689-b577ba9acc88", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
container_logstash    | [2020-04-25T14:50:36,246][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
container_logstash    | [2020-04-25T14:50:36,250][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
container_logstash    | [2020-04-25T14:50:36,253][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] ES Output version determined {:es_version=>7}
container_logstash    | [2020-04-25T14:50:36,253][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
container_logstash    | [2020-04-25T14:50:36,268][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
container_logstash    | [2020-04-25T14:50:36,271][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x6e9553e7 run>"}
container_logstash    | [2020-04-25T14:50:36,288][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
container_logstash    | [2020-04-25T14:50:36,294][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[:main]}
container_logstash    | [2020-04-25T14:50:36,398][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
container_logstash    | [2020-04-25T14:50:37,402][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
container_logstash    | [2020-04-25T14:50:38,337][INFO ][logstash.runner          ] Logstash shut down.

Let me know if you need any clarification or more information.

Thanks in advance for any resolution.

Dip*_*vda 0

@wobmene @bellackn Sorry for the late answer to this question I asked quite a while ago.

To resolve the issue above, I reconfigured ELKB as shown below. I may not be able to give a fully qualified explanation of the root cause, but I did my best.

quinn is the name I used for this build and its services.

ELKB repository structure

elkb
    - elasticsearch
        Dockerfile
        elasticsearch.yml
    - filebeat
        Dockerfile
        filebeat.yml
    - kibana
        Dockerfile
        kibana.yml
    - logstash
        - pipeline
            logstash.conf
        Dockerfile
        logstash.yml
    docker-compose.yml

ELKB ports

 - elasticsearch: 9200/9300  
 - logstash: 9600  
 - kibana: 5601  
 - filebeat: 5044

elkb/docker-compose.yml

# Docker version 19.03.5
# docker-compose version 1.25.3

version: "3.7"
services:
  elasticsearch:
    container_name: elasticsearch
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data:rw
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb

  quinn_logstash:
    container_name: quinn_logstash
    build:
      context: ./logstash
      dockerfile: Dockerfile
    ports:
      - 9600:9600
      - 5000:5000/udp
      - 5000:5000/tcp
    volumes:
      - ./logstash/input-logs:/usr/share/logstash/logs
      - ./logstash/data:/var/lib/logstash:rw
      - ./logstash/logs:/var/logs/logstash:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  quinn_kibana:
    container_name: quinn_kibana
    build:
      context: ./kibana
      dockerfile: Dockerfile
    ports:
      - 5601:5601
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  quinn_filebeat:
    container_name: quinn_filebeat
    build:
      context: ./filebeat
      dockerfile: Dockerfile
    ports:
      - 5044:5044
    volumes:
      - ./../logs:/input-logs
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - quinn_elkb
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

networks:
  quinn_elkb:
    driver: bridge

volumes:
  elasticsearch:
    driver: local

elkb/elasticsearch/Dockerfile

FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.2
COPY ./elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
RUN mkdir -p /var/log/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/log/elasticsearch
RUN mkdir -p /var/lib/elasticsearch
RUN chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
EXPOSE 9200
EXPOSE 9300

elkb/elasticsearch/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: quinn_es_cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: quinn_es_node_1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# ${path.data}
#
# Path to log files:
#
# ${path.logs}
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.seed_hosts: ["127.0.0.1", "[::1]", "0.0.0.0"]
discovery.seed_hosts: ["0.0.0.0"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["quinn_es_node_1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

elkb/filebeat/Dockerfile

FROM docker.elastic.co/beats/filebeat:7.6.2
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN mkdir -p /input-logs/
# RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN chmod go-w /usr/share/filebeat/filebeat.yml
USER filebeat
EXPOSE 5044

elkb/filebeat/filebeat.yml

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # Paths are relative to Filebeat's working directory inside the
      # container, which is /usr/share/filebeat.
      - ../../../input-logs/**/*.log

processors:
  - add_docker_metadata: ~

reload.enabled: true
reload.period: 10s

output.logstash:
  hosts: ["quinn_logstash:5044"]

logging.json: true
logging.metrics.enabled: false
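
Since the Filebeat Dockerfile creates `/input-logs/` and docker-compose bind-mounts `./../logs` there, an absolute container path is less fragile than the relative one above (a suggestion of mine, not the original configuration):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # Matches the /input-logs bind mount defined in docker-compose.yml
      - /input-logs/**/*.log
```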

elkb/kibana/Dockerfile

FROM docker.elastic.co/kibana/kibana:7.6.2
COPY ./kibana.yml /usr/share/kibana/config/kibana.yml
EXPOSE 5601

elkb/kibana/kibana.yml

server.name: quinn_kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
# elasticsearch.username: elastic
# elasticsearch.password: changeme

elkb/logstash/pipeline/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

elkb/logstash/Dockerfile

FROM docker.elastic.co/logstash/logstash:7.6.2
COPY ./logstash.yml /usr/share/logstash/config/logstash.yml
COPY ./pipeline/logstash.conf /usr/share/logstash/pipeline/logstash.conf
EXPOSE 9600

elkb/logstash/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: "http://elasticsearch:9200"
xpack.monitoring.enabled: true
# xpack.monitoring.elasticsearch.username: elastic
# xpack.monitoring.elasticsearch.password: changeme
Run Code Online (Sandbox Code Playgroud)

I read the following articles; all of them are good references that helped me resolve the issue above and configure ELKB.

 - https://medium.com/@sece.cosmin/docker-logs-with-elastic-stack-elk-filebeat-50e2b20a27c6
 - https://github.com/cosminseceleanu/tutorials
 - https://elk-docker.readthedocs.io/#prerequisites
 - https://github.com/elastic/stack-docker/blob/master/docker-compose.yml
 - https://github.com/elastic/elasticsearch/blob/master/distribution/docker/docker-compose.yml
 - http://cambio.name/index.php/node/522