Grok parse error while parsing multiple line messages

I am trying to figure out a grok pattern for parsing multi-line messages, such as exception traces. Below is an example of such a log:

2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR com.app.XYZ - Exception occurred while processing
java.lang.NullPointerException: null
        at spark.webserver.MatcherFilter.doFilter(MatcherFilter.java:162)
        at spark.webserver.JettyHandler.doHandle(JettyHandler.java:61)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
        at org.eclipse.jetty.server.Server.handle(Server.java:517)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:302)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:245)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint.run(SelectChannelEndPoint.java:75)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)

Here is my logstash.conf:

input {
  file {
    path => ["/debug.log"]
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }
}

filter {

  mutate {
    gsub => ["message", "\r", ""]
  }
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
    overwrite => [ "message" ]
  }
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
  }
}


output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}

This works for single-line log parsing, but for multi-line exception traces it fails with a "_grokparsefailure" tag.

Can someone suggest the correct filter pattern for parsing multi-line logs?

If you are working with multi-line logs, use the multiline filter provided by Logstash. You first need to tell the multiline filter how to recognize the start of a new record. From your log I can see that a new record starts with a timestamp, so the pattern below keys on that.

Sample usage:

filter {
  multiline {
    type => "/debug.log"
    pattern => "^%{TIMESTAMP}"
    what => "previous"
  }
}

You can then use gsub to strip the "\n" and "\r" characters that the multiline filter adds to your record, and apply grok after that.
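The grouping behavior described above can be illustrated with a small Python sketch (the function name and the timestamp regex are illustrative assumptions, not Logstash internals; the regex is a simplified stand-in for grok's TIMESTAMP_ISO8601). With negate => true and what => previous, any line that does not start with a timestamp is appended to the previous event:

```python
import re

# Simplified stand-in for "^%{TIMESTAMP_ISO8601} " (the real grok
# pattern is more permissive about separators and fractional seconds).
RECORD_START = re.compile(r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")

def group_multiline(lines):
    """Mimic multiline with negate => true, what => previous:
    a line that does NOT match the pattern is glued onto the
    previous event instead of starting a new one."""
    events = []
    for line in lines:
        if RECORD_START.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

log_lines = [
    "2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR com.app.XYZ - Exception occurred while processing",
    "java.lang.NullPointerException: null",
    "        at spark.webserver.MatcherFilter.doFilter(MatcherFilter.java:162)",
]
events = group_multiline(log_lines)
# All three input lines are collapsed into a single event.
```

The stack-trace lines fail the timestamp test, so they are folded into the preceding event, which is exactly what lets the grok filter later see the whole exception as one message.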

The logstash configuration above works fine after removing

    mutate { gsub => ["message", "\r", ""] }

So here is the working logstash configuration for parsing both single-line and multi-line input for the above log pattern:

input {
  file {
    path => ["./debug.log"]
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }
}

filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
    overwrite => [ "message" ]
  }
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
  }
}


output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
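The grok expression in the filter can be sanity-checked offline by hand-expanding it into a plain regex. The sub-patterns below are simplified assumptions fitted to the sample log, not the exact grok library definitions, and the message group is made dot-all here so it also swallows the stack trace (grok's GREEDYDATA normally stops at a newline, but the match still succeeds on a multi-line event because it is not anchored to the end):

```python
import re

# Rough hand expansion of the grok match used in the filter block:
# NOTSPACE -> \S+, LOGLEVEL -> a fixed alternation, DATA -> lazy .*?,
# GREEDYDATA -> a dot-all group.
GROK = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<uid>\S+)\] \[(?P<thread>\S+)\] "
    r"(?P<loglevel>TRACE|DEBUG|INFO|WARN|ERROR|FATAL) "
    r"(?P<cls>.*?)-(?P<message>(?s:.*))"
)

event = (
    "2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR "
    "com.app.XYZ - Exception occurred while processing\n"
    "java.lang.NullPointerException: null"
)

m = GROK.match(event)
# The first line yields timestamp, uid, thread, loglevel, and class;
# everything after the "-" separator, including the exception line,
# lands in the "message" group.
```

This also shows why the original config's `gsub => ["message", "r", ""]` (with the backslash lost) would break the match: stripping every literal "r" turns "ERROR" into "ERO", which no longer matches the loglevel alternation.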