Logstash logback encoder, logstash forwarder and logstash
Following the advice in https://blog.codecentric.de/en/2014/10/log-management-spring-boot-applications-logstash-elastichsearch-kibana/ I have set up the logstash encoder + logstash forwarder to push everything to my logstash daemon and eventually index everything in ElasticSearch.
Here is my configuration:
logstash.xml
<included>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <property name="FILE_LOGSTASH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}/}spring.log}.json"/>
    <appender name="LOGSTASH"
              class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${FILE_LOGSTASH}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${FILE_LOGSTASH}.%i</fileNamePattern>
        </rollingPolicy>
        <triggeringPolicy
                class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>10MB</MaxFileSize>
        </triggeringPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerInfo>true</includeCallerInfo>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</included>
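For reference, each line the LogstashEncoder writes to the .json file is a self-contained JSON event that already carries the log level as its own field. The exact field set depends on the encoder version; a typical line (logger name and message are illustrative) looks roughly like:

```json
{"@timestamp":"2015-03-10T14:32:01.123+01:00","@version":1,"message":"Started Application in 3.2 seconds","logger_name":"org.example.Application","thread_name":"main","level":"INFO","level_value":20000}
```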
logstash-forwarder.conf
{
    "network": {
        "servers": [
            "logstash:5043"
        ],
        "ssl certificate": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "ssl key": "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key",
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "timeout": 15
    },
    "files": [
        {
            "paths": [
                "${ENV_SERVICE_LOG}/*.log.json"
            ],
            "fields": {
                "type": "${ENV_SERVICE_NAME}"
            }
        }
    ]
}
logstash.conf
input {
    lumberjack {
        port => 5043
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key"
    }
}
output {
    elasticsearch { host => "localhost" }
}
Everything works fine and the logs are being stored in ElasticSearch.
At this point I would like to be able to specify additional fields to be indexed by ElasticSearch, such as the log level. Searching inside the @message content for the presence of errors or warnings is not very useful.
How can I do that? Which configuration should I change so that the level shows up as an indexed field in ElasticSearch?
What you're looking for is a logstash filter, which is used on your indexer as a peer to the input and output stanzas.
There are tons of filters (see the doc), but you would use grok{} to apply a regular expression to your message field and extract the log level.
You didn't include a sample message, but, given a string like "foo 123 bar", this pattern would extract "123" into an integer field called loglevel:
grok {
    match => ["message", "foo %{NUMBER:loglevel:int} bar"]
}
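For a Spring Boot-style log line beginning with a timestamp and level, a comparable sketch (the field names here are illustrative, and the pattern would need adjusting to the actual layout) could be:

```
grok {
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:loglevel}\s+%{GREEDYDATA:msg}"]
}
```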
There's lots of information online about writing grok patterns. Try this one.
logstash config file:
input {
    file {
        path => [ "/tmp/web.log" ]
    }
}
filter {
    grok {
        match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:severity} %{GREEDYDATA:message}" ]
    }
}
output {
    elasticsearch {
        host => "127.0.0.1"
        index => "web-%{+YYYY.MM.dd}"
    }
}
You can specify extra fields with 'add_tag' or 'add_field'.
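As a sketch of that option, using a mutate filter (the tag and field values here are made up for illustration):

```
filter {
    mutate {
        add_tag => [ "web" ]
        add_field => { "environment" => "production" }
    }
}
```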