logstash multiline codec with java stack trace
I'm trying to parse a log file with grok. The configuration I use lets me parse single-line events, but not multiline ones (those with a Java stack trace).
# What I get in Kibana for a single-line event:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "mluzA57TnCpH-XBRbeg",
  "_score": null,
  "_source": {
    "message": " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.310Z",
    "path": "/root/test2.log",
    "time": "2014-01-14 11:09:35,962",
    "main": "main",
    "loglevel": "INFO",
    "class": "api.batch.ThreadPoolWorker",
    "mydata": " user.country=US"
  },
  "sort": [
    1423129101310,
    1423129101310
  ]
}
# What I get for a multiline event with a stack trace:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "9G6LsSO-aSpsas_jOw",
  "_score": null,
  "_source": {
    "message": "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.380Z",
    "path": "/root/test2.log",
    "tags": [
      "_grokparsefailure"
    ]
  },
  "sort": [
    1423129101380,
    1423129101380
  ]
}
input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^ - %{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => [ "message", " -%{SPACE}%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata} %{JAVASTACKTRACEPART}" ]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    host => "194.3.227.23"
  }
  # stdout { codec => rubydebug }
}
Can anyone tell me what I'm doing wrong in my configuration file? Thanks.
Here is a sample of my log file:
 - 2014-01-14 11:09:36,447 [main] INFO  (support.context.ContextFactory) Creating default context
 - 2014-01-14 11:09:38,623 [main] ERROR (support.context.ContextFactory) Error getting connection to database jdbc:oracle:thin:@HAL9000:1521:DEVPRINT, with user cisuser and driver oracle.jdbc.driver.OracleDriver
java.sql.SQLException: ORA-28001: the password has expired
	at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
	at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
EDIT: here is the latest configuration I am using:
https://gist.github.com/anonymous/9afe80ad604f9a3d3c00#file-output-L1
First point: when testing repeatedly with a file input, be sure to use sincedb_path => "/dev/null" so that reading always starts from the beginning of the file.
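For example, the file input from the question needs only one extra line; everything else below is unchanged from the original config:

```
input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # forget the stored read position, so every run re-reads the file
    codec => multiline {
      pattern => "^ - %{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}
```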
Regarding the multiline part, something is definitely wrong with either your input content or your multiline pattern, because none of your events has the multiline tag, which the multiline codec (or filter) adds when it aggregates lines.
Your message field should contain all the lines separated by the newline character \n (\r\n in my case, on Windows). This is the expected output of the input configuration:
{
  "@timestamp" => "2015-02-10T11:03:33.298Z",
  "message" => " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r",
  "@version" => "1",
  "tags" => [
    [0] "multiline"
  ],
  "host" => "localhost",
  "path" => "/root/test.file"
}
Regarding grok, if you want to match a multiline string you should use patterns like these:
filter {
  grok {
    match => { "message" => [
      "(?m)^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{DATA:mydata}\n%{GREEDYDATA:stack}",
      "^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata}"
    ] }
  }
}
The (?m) prefix tells the regex engine to match across line breaks.
Then you will get an event like this:
{
  "@timestamp" => "2015-02-10T10:47:20.078Z",
  "message" => " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r",
  "@version" => "1",
  "tags" => [
    [0] "multiline"
  ],
  "host" => "localhost",
  "path" => "/root/test.file",
  "time" => "2014-01-14 11:09:35,962",
  "main" => "main",
  "loglevel" => "INFO",
  "class" => "api.batch.ThreadPoolWorker",
  "mydata" => " user.country=US\r",
  "stack" => "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r"
}
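To see why (?m) matters: in Oniguruma, the regex engine grok uses, (?m) means "dot also matches newline", which is what lets the pattern run past the first line break into the stack trace. Python's equivalent flag is re.DOTALL (its re.MULTILINE means something else). The sketch below is a rough hand-expansion of the multiline grok pattern into a plain regex; the group names mirror the grok field names, but the sub-regexes are my simplified stand-ins for %{TIMESTAMP_ISO8601}, %{JAVACLASS}, etc.

```python
import re

event = (" - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker)"
         " user.country=US\n"
         "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)")

# Header fields, then a lazy mydata up to the first newline,
# then everything that remains goes into "stack".
pattern = re.compile(
    r"^ -\s*(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"\[(?P<main>\w+)\] (?P<loglevel>\w+)\s*\((?P<cls>[\w.]+)\) "
    r"(?P<mydata>.*?)\n(?P<stack>.*)",
    re.DOTALL,  # plays the role of (?m) in Oniguruma
)

m = pattern.match(event)
print(m.group("loglevel"))  # INFO
print(m.group("stack"))     # the "\tat oracle.jdbc..." continuation line
```

Without re.DOTALL the final `.*` would stop at the newline and the stack group would come back empty, which is exactly the failure mode of a grok pattern missing (?m).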
You can build and validate your multiline patterns with this online tool: http://grokconstructor.appspot.com/do/match
One last warning: the Logstash file input with the multiline codec currently has a bug that mixes up content from multiple files when you use a list or wildcards in the path setting. The only workaround is to use the multiline filter instead.
HTH
EDIT: I focused on the multiline string; you need to add a similar pattern for the non-multiline (single-line) strings.