fluentd loses milliseconds and now log messages are stored out of order in elasticsearch

Dav*_*ell 8 elasticsearch fluentd kibana

I am using fluentd to centralize log messages in elasticsearch and view them with kibana. When I view log messages, messages that occurred in the same second are out of order, and the milliseconds in @timestamp are all zeros:

2015-01-13T11:54:01.000-06:00   DEBUG   my message

How do I get fluentd to store milliseconds?

Dav*_*ell 13

Fluentd does not currently support sub-second resolution: https://github.com/fluent/fluentd/issues/461

I worked around this by adding a new field to all of the log messages with record_reformer to store nanoseconds since the epoch.

For example, if your fluentd has some inputs like so:

#
# Syslog
#
<source>
    type syslog
    port 5140
    bind localhost
    tag syslog
</source>

#
# Tomcat log4j json output
#
<source>
    type tail
    path /home/foo/logs/catalina-json.out
    pos_file /home/foo/logs/fluentd.pos
    tag tomcat
    format json
    time_key @timestamp
    time_format "%Y-%m-%dT%H:%M:%S.%L%Z"
</source>

Then change them to look like this, adding a record_reformer match that appends the nanosecond field:

#
# Syslog
#
<source>
    type syslog
    port 5140
    bind localhost
    tag cleanup.syslog
</source>

#
# Tomcat log4j json output
#
<source>
    type tail
    path /home/foo/logs/catalina-json.out
    pos_file /home/foo/logs/fluentd.pos
    tag cleanup.tomcat
    format json
    time_key @timestamp
    time_format "%Y-%m-%dT%H:%M:%S.%L%Z"
</source>

<match cleanup.**>
    type record_reformer
    time_nano ${t = Time.now; ((t.to_i * 1000000000) + t.nsec).to_s}
    tag ${tag_suffix[1]}
</match>
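The `${...}` expression above is plain Ruby evaluated by record_reformer for each record. As a standalone sketch, this is what it computes (note it uses the time the record passes through fluentd, not the original event time):

```ruby
# Nanoseconds since the Unix epoch, as a string, mirroring the
# record_reformer expression: ${t = Time.now; ((t.to_i * 1000000000) + t.nsec).to_s}
t = Time.now
time_nano = ((t.to_i * 1_000_000_000) + t.nsec).to_s
puts time_nano
```

Storing it as a string of a fixed-magnitude integer keeps records from the same second in arrival order when sorted on this field.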

Then add the time_nano field to your kibana dashboards and use it to sort instead of @timestamp, and everything will be in order.