I am using Fluentd and Elasticsearch to ingest logs from Kubernetes, but I have noticed that some JSON logs cannot be indexed correctly because the JSON is stored as a string.
The logs shown by kubectl logs look like this:
{"timestamp":"2016-11-03T15:48:12.007Z","level":"INFO","thread":"cromwell-system-akka.actor.default-dispatcher-4","logger":"akka.event.slf4j.Slf4jLogger","message":"Slf4jLogger started","context":"default"}
But the logs saved in the /var/log/containers/... files have the quotes escaped, which turns the payload into a string instead of JSON and breaks the indexing:
{"log":"{\"timestamp\":\"2016-11-03T15:45:07.976Z\",\"level\":\"INFO\",\"thread\":\"cromwell-system-akka.actor.default-dispatcher-4\",\"logger\":\"akka.event.slf4j.Slf4jLogger\",\"message\":\"Slf4jLogger started\",\"context\":\"default\"}\n","stream":"stdout","time":"2016-11-03T15:45:07.995443479Z"}
I am trying to get the logs to look like this:
{
    "log": {
        "timestamp": "2016-11-03T15:45:07.976Z",
        "level": "INFO",
        "thread": "cromwell-system-akka.actor.default-dispatcher-4",
        "logger": "akka.event.slf4j.Slf4jLogger",
        "message": "Slf4jLogger started",
        "context": "default"
    },
    "stream": "stdout",
    "time": "2016-11-03T15: 45: 07.995443479Z"
}
Can you suggest how I can do this?
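A sketch of one commonly used fix (not verified against this setup, and the kubernetes.** tag pattern is an assumption): re-parse the escaped JSON in the log field with Fluentd's parser filter.

# Hedged sketch: re-parse the escaped JSON string held in the "log" field.
# Assumes the filter_parser plugin bundled with Fluentd v1.x and that the
# container events carry a tag matching kubernetes.**.
<filter kubernetes.**>
  @type parser
  key_name log          # the field containing the escaped JSON string
  reserve_data true     # keep stream/time next to the parsed fields
  hash_value_field log  # nest the parsed object back under "log"
  <parse>
    @type json
  </parse>
</filter>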
There are zero error messages when starting the Fluentd Docker container, so it is hard to debug.
Running curl http://elasticsearch:9200/_cat/indices from the fluentd container shows indices, but not the fluentd index.
docker logs 7b
2018-06-29 13:56:41 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-06-29 13:56:41 +0000 [info]: starting fluentd-0.12.19
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-rename-key' version '0.1.3'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.12.19'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.10.61'
2018-06-29 13:56:41 +0000 [info]: adding filter pattern="**" type="record_transformer"
2018-06-29 13:56:41 +0000 [info]: adding match pattern="docker.*" type="rename_key"
2018-06-29 13:56:41 +0000 …
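Since nothing shows up at the default log level, one debugging step (a sketch, not a diagnosis) is to raise the verbosity of the Elasticsearch output plugin, or to start fluentd with -vv:

# Hedged sketch: @log_level is a standard per-plugin parameter, so the
# Elasticsearch output can be made verbose without touching anything else.
<match **>
  @type elasticsearch
  @log_level debug
  # ... the existing host/port/index settings stay as they are
</match>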
I have the following source:
<source>
    @type tail
    tag service
    path /tmp/l.log
    format json
    read_from_head true
</source>
I want to apply several filters to it and match it to several outputs:
<source>
    @type tail
    tag service.pi2
    path /tmp/out.log
    format json
    read_from_head true
</source>
<source>
    @type tail
    tag service.data
    path /tmp/out.log
    format json
    read_from_head true
</source>
<filter service.data>
   # some filtering
</filter>
<filter service.pi2>
   # some filtering
</filter>
<match service.data>
  @type file
  path /tmp/out/data
</match>
<match service.pi2>
  @type file
  path /tmp/out/pi
</match>
So far, to make everything work, I have had to duplicate the source with different tags. Can I make this work from a single source definition?
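A sketch of how this is often done from a single source (assuming Fluentd v0.12+ with the built-in copy and relabel plugins; untested against the config above): fan the events out into two labels, each with its own filters and output.

<source>
  @type tail
  tag service
  path /tmp/l.log
  format json
  read_from_head true
</source>
<match service>
  @type copy
  <store>
    @type relabel
    @label @PI2
  </store>
  <store>
    @type relabel
    @label @DATA
  </store>
</match>
<label @DATA>
  <filter service>
    # some filtering
  </filter>
  <match service>
    @type file
    path /tmp/out/data
  </match>
</label>
<label @PI2>
  <filter service>
    # some filtering
  </filter>
  <match service>
    @type file
    path /tmp/out/pi
  </match>
</label>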
I have the following configuration in my docker-compose file:
 fluentd:
    build: ./fluentd
    container_name: fluentd
    expose:
    - 24224
    - 24224/udp
    depends_on:
    - "elasticsearch"
    networks:
    -  internal
 public-site:
    build: ./public-site
    container_name: public-site
    depends_on:
    - fluentd
    logging:
      driver: fluentd
      options:
        tag: public-site
    networks:
    -  internal
networks:
  internal:
When I start the application with docker-compose up, the web server fails with the error message ERROR: for public-site Cannot start service public-site: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused.
On the other hand, when I publish the port from fluentd (ports: 24224:24224), it works. The problem is that I do not want to publish the port on the host, because that bypasses the Linux firewall (i.e. it exposes the fluentd port to everyone, see here).
This is confusing, because exposing a port should make it available to every container on the network. I am using an internal network between fluentd and the web server, so I expected fluentd's exposed port to be enough (it is not). …
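One detail that explains this behaviour: the fluentd logging driver is dialled by the Docker daemon on the host, not from inside the compose network, so expose alone can never be enough. A hedged compromise sketch is to publish the port bound to the loopback interface only:

 fluentd:
    build: ./fluentd
    container_name: fluentd
    ports:
    # binding to 127.0.0.1 keeps 24224 reachable for the host-side Docker
    # daemon (which is what the fluentd log driver uses) without exposing
    # it on external interfaces
    - "127.0.0.1:24224:24224"
    - "127.0.0.1:24224:24224/udp"
    networks:
    -  internal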
I am new to fluentd. I have configured the basic fluentd setup I need and deployed it to my kubernetes cluster as a daemonset. I can see logs being shipped to my third-party logging solution. However, I now want to handle some logs that arrive as multiple entries when they really should be one. The logs from the nodes look like JSON and are formatted as follows:
{"log":"2019-09-23 18:54:42,102 [INFO] some message \n","stream":"stderr","time":"2019-09-23T18:54:42.102Z"}
{"log": "another message \n","stream":"stderr","time":"2019-09-23T18:54:42.102Z"}
I have a ConfigMap that looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-config-map
  namespace: logging
  labels:
    k8s-app: fluentd-logzio
data:
  fluent.conf: |-
@include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
@include kubernetes.conf
@include conf.d/*.conf
<match fluent.**>
    # this tells fluentd to not output its log on stdout
    @type null
</match>
# here we read the logs from Docker's containers and parse them
<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos …
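For the multi-entry logs described above, one common approach (a sketch only; it assumes the fluent-plugin-concat filter is installed and that container logs are tagged kubernetes.**) is to concatenate lines until the next one starts with a timestamp:

# Hedged sketch using fluent-plugin-concat: lines in the "log" field are
# joined until a new line beginning with "YYYY-MM-DD HH:MM:SS" starts the
# next event. The tag pattern and regexp are assumptions, not taken from
# the ConfigMap above.
<filter kubernetes.**>
  @type concat
  key log
  multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/
  flush_interval 5
</filter>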
I need some help with the following problem.
I have a Spring Boot application and I want to send its logs to fluentd using logback.
I created a file called logback.xml in my src/main/resources with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%date - %level - [%thread] - %logger - [%file:%line] - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FLUENT_TEXT" class="ch.qos.logback.more.appenders.DataFluentAppender">
        <tag>dab</tag>
        <label>normal</label>
        <remoteHost>localhost</remoteHost>
        <port>24224</port>
        <maxQueueSize>20</maxQueueSize>
    </appender>
    <logger name="org.com" level="DEBUG"/>
    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FLUENT_TEXT" />
    </root>
</configuration>
In my build.gradle I have:
compile 'org.fluentd:fluent-logger:0.3.1'
compile 'com.sndyuk:logback-more-appenders:1.1.0'
When I start the application with gradle bootRun, I get the following message:
10:56:33,020 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - Attempted to append to non started …
Using Kibana, I managed to visualize the successive requests in a line chart with Count on the Y-axis and @timestamp on the X-axis, split by a Terms aggregation on the IP address field. Now I would like to derive the average, minimum, and maximum session duration from this. Is that possible? I have not quite figured out the right way to proceed from here.
I have installed the Fluentd-kubernetes-daemonset on my kube workers. One of them works without any errors, but the other throws the following error:
2018-12-07 03:48:33 +0000 [warn]: #0 [in_systemd_bootkube] Systemd::JournalError: No such file or directory retrying in 1s
2018-12-07 03:48:36 +0000 [warn]: #0 [in_systemd_kubelet] Systemd::JournalError: No such file or directory retrying in 1s
2018-12-07 03:48:39 +0000 [warn]: #0 [in_systemd_bootkube] Systemd::JournalError: No such file or directory retrying in 1s
2018-12-07 03:48:40 +0000 [warn]: #0 [in_systemd_docker] Systemd::JournalError: No such file or directory retrying in 1s
2018-12-07 03:48:45 +0000 [warn]: #0 [in_systemd_kubelet] Systemd::JournalError: No such file or directory retrying in 1s …
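These in_systemd_* warnings usually mean the systemd journal files are not where the daemonset expects them on that worker. If the systemd input is not needed, the fluentd-kubernetes-daemonset images read an environment variable that disables it; a sketch of the container env (whether this fits the affected node is an assumption):

# Hedged sketch for the daemonset's fluentd container spec: the image's config
# includes "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf", so setting the
# variable to "disable" skips the systemd inputs entirely.
env:
  - name: FLUENTD_SYSTEMD_CONF
    value: disable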
I am trying to use stable/fluent-bit as a subchart of my chart. That chart has the following value in its values.yaml:
backend:
  es:
    host: elasticsearch
How can I set the value of backend.es.host to something like {Release.Name}-elasticsearch without changing the fluent-bit chart?
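For context: Helm does not render templates inside a values.yaml, so {Release.Name} cannot be interpolated there; the usual options are a literal override in the parent chart's values or --set at install time. A sketch of the parent-chart override (the fluent-bit key must match the dependency name, which is an assumption here):

# Parent chart values.yaml - hedged sketch. Keys under "fluent-bit" are passed
# down to the stable/fluent-bit subchart; the release name has to be supplied
# literally here or via --set fluent-bit.backend.es.host=<release>-elasticsearch
# at install/upgrade time.
fluent-bit:
  backend:
    es:
      host: my-release-elasticsearch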
I am using fluentd in my kubernetes cluster to collect logs from the pods and send them to elasticsearch. Every day or two, fluentd hits the following error:
[warn]: #0 emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.7.4/lib/fluent/plugin/buffer.rb:265:in `write'"
and fluentd stops sending logs until I restart the fluentd pod.
How can I avoid this error?
Maybe I need to change something in my configuration?
<match filter.Logs.**.System**>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME']}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  logstash_format true
  logstash_prefix system
  type_name systemlog
  time_key_format %Y-%m-%dT%H:%M:%S.%NZ
  time_key time
  log_es_400_reason true
  <buffer>
    flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
    flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
    chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '8M'}"
    queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
    retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
    retry_forever true
  </buffer> …
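A sketch of the buffer parameters that usually matter for BufferOverflowError (the values below are illustrative assumptions, not recommendations tuned for this cluster): raise total_limit_size, choose what happens on overflow instead of raising, and bound the retries.

  # Hedged sketch of an adjusted <buffer> section (Fluentd v1.x parameters):
  <buffer>
    @type file                        # file buffer survives pod restarts
    path /var/log/fluentd-buffers/system.buffer
    total_limit_size 2GB              # the limit whose breach raises BufferOverflowError
    chunk_limit_size 8M
    flush_thread_count 8
    flush_interval 5s
    overflow_action drop_oldest_chunk # or "block"; the default throw_exception produces the error above
    retry_max_interval 30
    retry_timeout 24h                 # bounded retries instead of retry_forever, so a long
                                      # Elasticsearch outage cannot keep the buffer pinned full
  </buffer>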