Part of my grok filter (which works) captures the following two fields:
%{NUMBER:XCent}%{NUMBER:YCent}
These are the lat and long points.
I am trying to add a location pin, but I still get a configuration failure when I run with the --debug flag on the config file.
All of my configuration works up until this section.
if [XCent] and [YCent] {
  mutate {
    add_field => {
      "[location][lat]" => "%{XCent}"
      "[location][lon]" => "%{YCent}"
    }
  }
  mutate {
    convert => {
      "[location][lat]" => "float"
      "[location][lon]" => "float"
    }
  }
  mutate {
    convert => {"[location]", "geo_point"}
  }
}
My thinking is that this is basically what the Elastic docs for Logstash 1.4 suggest:
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/mapping-geo-point-type.html
EDIT: found a better way to apply the configuration in the filter; updated the code.
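A note on the failing block (my addition, not part of the original question): mutate's convert option only accepts integer, float, string, and boolean targets, so "geo_point" is not something a Logstash filter can convert to; geo_point is a mapping type declared on the Elasticsearch index. A sketch of the filter with the failing mutate dropped, assuming the index template maps location as geo_point:

```
# Sketch: build [location] as lat/lon floats in Logstash; the geo_point
# typing itself belongs in the Elasticsearch index mapping/template.
if [XCent] and [YCent] {
  mutate {
    add_field => {
      "[location][lat]" => "%{XCent}"
      "[location][lon]" => "%{YCent}"
    }
  }
  mutate {
    convert => {
      "[location][lat]" => "float"
      "[location][lon]" => "float"
    }
  }
}
```

On the Elasticsearch side, the index template would then map "location" with "type": "geo_point".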
I have nginx error logs of the following form:
2015/09/30 22:19:38 [error] 32317#0: *23 [lua] responses.lua:61: handler(): Cassandra error: UNIQUE check error: Cassandra error: Connection refused, client: 127.0.0.1, server: , request: "POST /consumers/ HTTP/1.1", host: "localhost:8001"
As mentioned above, I am able to parse these logs.
My filter configuration is as follows:
filter {
  grok {
    match => {
      "message" => [
        "%{DATESTAMP:mydate} \[%{DATA:severity}\] (%{NUMBER:pid:int}#%{NUMBER}: \*%{NUMBER}|\*%{NUMBER}) %{GREEDYDATA:mymessage}",
        "%{DATESTAMP:mydate} \[%{DATA:severity}\] %{GREEDYDATA:mymessage}",
        "%{DATESTAMP:mydate} %{GREEDYDATA:mymessage}"
      ]
    }
    add_tag => ["nginx_error_pattern"]
  }
  if ("nginx_error_pattern" in [tags]) {
    grok {
      match => {
        "mymessage" => [
          "server: %{DATA:[request_server]},"
        ]
      }
    }
    grok {
      match => {
        "mymessage" => [
          "host: \"%{IPORHOST:[request_host]}:%{NUMBER:[port]}\""
        ]
      }
    }
    grok {
      match => {
        "mymessage" => …

I am trying to install this plugin on my Amazon Linux AMI EC2 instance. A normal install using bin/logstash-plugin install logstash-output-amazon_es gives me this error:

Error Bundler::InstallError, retrying 1/10
An error occurred while installing faraday_middleware (0.10.0), and Bundler cannot continue.
Make sure that gem install faraday_middleware -v '0.10.0' succeeds before bundling.
So I tried cloning the repository and building it with gem build logstash-output-amazon_es.gemspec. This succeeded:
sudo bin/logstash-plugin install logstash-output-amazon_es-0.3.gem
Validating logstash-output-amazon_es-0.3.gem
Installing logstash-output-amazon_es
Installation successful
But when I run a configtest on my Logstash config file, it throws an error:
给定的配置无效。原因:找不到任何名为“amazon_es”的输出插件。你确定这是正确的吗?尝试加载 amazon_es 输出插件导致此错误:没有要加载的此类文件 -- logstash/outputs/amazon_es {:level=>:fatal}
What am I doing wrong here?
amazon-ec2 amazon-web-services elasticsearch logstash logstash-configuration
I am forwarding my docker logs to Logstash via the syslog driver. This works great for normal log lines, but is problematic for multi-line ones. The issue I am running into is that docker's log forwarding prepends the syslog message format to every log line. If I use the Logstash multiline filter (which Logstash does not recommend), I can successfully combine the lines and strip the syslog header from the continuation lines... however, that filter is not thread-safe. I have not been able to get the logic working with the multiline input codec that Logstash recommends.
For example:
Docker command:
docker run --rm -it \
--log-driver syslog \
--log-opt syslog-address=tcp://localhost:15008 \
helloWorld:latest
Log lines in the docker container:
Log message A
<<ML>> Log message B
more B1
more B2
more B3
Log message C
Logs as received in Logstash:
<30>Jul 13 16:04:36 [1290]: Log message A
<30>Jul 13 16:04:37 [1290]: <<ML>> Log message B
<30>Jul 13 16:04:38 [1290]: more B1
<30>Jul 13 16:04:39 [1290]: more B2
<30>Jul 13 16:04:40 [1290]: more B3
<30>Jul …

Here is the test command from the tutorial:
./logstash -e 'input { stdin { } } output { stdout {} }'
Here is the error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
01:55:14.242 [main] FATAL logstash.runner - An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:433:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:216:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:132:in …

I have installed riemann-0.2.13-1.noarch as an RPM service. I am sending events from Logstash 5.2.1, and the plugin I installed is logstash-output-riemann-3.0.0. I wrote the Riemann code below to trigger an email, but I keep getting an exception. Please point out where I have gone wrong.
Riemann config:
(let [email (mailer {:host "XXXXX"
                     :port 25
                     :subject (fn [events] "Consecutive login failed")
                     :body (fn [events] "Hello Team, \n \n There are more consecutive logins failure occured @" (riemann.common/time-at (:timestamp event))"")
                     :from "XXXX"})]
  (streams
    ;Check for every 120sec events
    (fixed-time-window
      120
      (smap
        (fn [events]
          (let [count-of-failures (count (filter #(re-find #"com.thed.server.access.Exception. Please reset credentials for user. Last error occurred was:Authentication failed, please check user credentials*" (:message %)) events))] ;Calculate the count for matched value
            (event
              {:status "Class failures" …

I get a JSON document via http_poller:
{
  "id": 12345,
  "name": "",
  "lastname": "",
  "age": 12,
  "address": {"city": "XXXX", "street": "ZZZZ"}
}
I want to generate two documents in my output:
Person:
{
  "id": 12345,
  "name": "",
  "lastname": "",
  "age": 12
}
Address:
{
  "city": "XXXX",
  "street": "ZZZZ"
}
That is, I have a single event on the input side.
In the input stage I receive one input:
input {
  http_poller {
    urls => {
      test1 => "http://localhost:8080"
    }
  }
}
In the filter stage, I want to:
In the output stage, I want to:
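(My addition, not part of the original question.) One common way to fan a single event out into two documents is the clone filter, which copies each event and sets the copy's type to the clone name. A sketch, assuming hypothetical index names person and address:

```
filter {
  # copy every event; the copy gets type "address"
  clone { clones => ["address"] }

  if [type] == "address" {
    # the copy keeps only the nested address fields
    mutate {
      rename => { "[address][city]" => "city" "[address][street]" => "street" }
      remove_field => ["id", "name", "lastname", "age", "address"]
    }
  } else {
    # the original event drops the nested address
    mutate { remove_field => ["address"] }
  }
}

output {
  if [type] == "address" {
    elasticsearch { index => "address" }
  } else {
    elasticsearch { index => "person" }
  }
}
```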
I am running into this exception when fetching data with the Logstash JDBC input plugin:
error:
26413962
Sequel::InvalidValue
TZInfo::AmbiguousTime: 2017-11-05T01:30:00+00:00 is an ambiguous local time.
This is probably because I convert the time zone in the JDBC plugin with the following parameter:
jdbc_default_timezone => "America/New_York"
As a result, 1:30 AM on November 5 occurred twice, and I suspect Logstash did not know what to do with it and got stuck in an infinite loop.
As a workaround, I removed the jdbc_default_timezone parameter and instead cast the values to UTC in the select statement.
But this workaround is annoying, because I need to modify the date columns of all of my Logstash inputs.
Is there a way to force it to pick either of the two possible times, or any more elegant approach?
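For illustration (my addition), the cast-in-the-select workaround described above can be wired into the jdbc input like this; the table and column names are hypothetical, and AT TIME ZONE is SQL Server syntax that varies by database:

```
input {
  jdbc {
    # connection settings omitted; the point is that the database, not
    # Logstash, resolves the ambiguous local time by emitting UTC values
    statement => "SELECT id, modified_at AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC' AS modified_at_utc FROM my_table"
  }
}
```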
I am trying to send a log file from filebeat -> logstash -> elasticsearch (filebeat.yml shown below), but I am getting the following error in the filebeat log:
2017-12-07T16:15:38+05:30 ERR Failed to connect: dial tcp [::1]:5044: connectex: No connection could be made because the target machine actively refused it.
My filebeat and logstash configurations are as follows:
1. filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - C:\Users\shreya\Data\mylog.log
  document_type: springlog
  multiline.pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}
  multiline.negate: true
  multiline.match: before
output.logstash:
  hosts: ["localhost:5044"]
2. logstash.yml
http.host: "127.0.0.1"
http.port: 5044
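One thing worth noting here (my observation, not part of the original post): in logstash.yml, http.host and http.port configure Logstash's monitoring API (default 9600), not the beats listener, so pointing http.port at 5044 can collide with the beats input that also binds 5044. A sketch of the two kept apart:

```
# logstash.yml -- monitoring API on its default port; the beats input
# (port => 5044 in the pipeline config) is what filebeat should reach
http.host: "127.0.0.1"
http.port: 9600
```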
3. logstash conf file:
input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "^(%{TIMESTAMP_ISO8601})"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok { …

I use docker-compose to run ELKB. My main goal is to start the elasticsearch and logstash containers. The logstash container should connect to elasticsearch successfully and pass the logs on to elasticsearch for further searching or processing.
But for some unknown reason the logstash container keeps stopping. I need the logstash and elasticsearch containers to stay up, but that is not happening.
I don't know what causes the logstash container to shut down repeatedly.
I am using elasticsearch:7.6.3 and logstash:7.6.3.
Please check the code below and point out where I have made a mistake.
docker-compose.yml
# Docker version 19.03.5
# docker-compose version 1.25.3
version: "3.7"
services:
  elasticsearch:
    container_name: elasticsearch
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data:rw
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elkb
  logstash:
    container_name: logstash
    build:
      context: ./logstash
      dockerfile: Dockerfile
    ports:
      - 9600:9600
      - 5000:5000/udp
      - 5000:5000/tcp
    volumes:
      - ./logstash/input-logs:/usr/share/logstash/logs
      - ./logstash/data:/var/lib/logstash:rw
      - ./logstash/logs:/var/logs/logstash:rw
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: …

docker docker-compose logstash-configuration elasticsearch-7
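One common pattern worth checking for the question above (my note, not from the original post): depends_on ordering alone does not wait for elasticsearch to be ready, so logstash can come up first and fail against an unreachable cluster. A sketch of startup ordering with a readiness check; note that the long-form depends_on with condition requires a Compose implementation newer than the docker-compose 1.25.3 shown above, and the curl probe assumes curl exists in the elasticsearch image:

```yaml
services:
  elasticsearch:
    healthcheck:
      # hypothetical readiness probe against the HTTP API
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
  logstash:
    depends_on:
      elasticsearch:
        condition: service_healthy
```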