
Datadog Grok Parsing - extracting fields from nested JSON

Is it possible to extract JSON fields that are nested inside a log?

The example I've been working with:

thread-191555 app.main - [cid: 2cacd6f9-546d-41ew-a7ce-d5d41b39eb8f, uid: e6ffc3b0-2f39-44f7-85b6-1abf5f9ad970] Request: protocol=[HTTP/1.0] method=[POST] path=[/metrics] headers=[Timeout-Access: <function1>, Remote-Address: 192.168.0.1:37936, Host: app:5000, Connection: close, X-Real-Ip: 192.168.1.1, X-Forwarded-For: 192.168.1.1, Authorization: ***, Accept: application/json, text/plain, */*, Referer: https://google.com, Accept-Language: cs-CZ, Accept-Encoding: gzip, deflate, User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko, Cache-Control: no-cache] entity=[HttpEntity.Strict application/json {"type":"text","extract": "text", "field2":"text2","duration": 451 }

What I'd like to end up with is:

{
  "extract": "text",
  "duration": "451"
}

I've tried combining a sample regex ("(extract)"\s*:\s*"([^"]+)",?) with example_parser %{data::json} (using just the JSON as the log sample data, for starters), but I haven't managed to get anything working.

Thanks in advance!

Is the sample text formatted correctly? The final entity object is missing a ] at the end:

entity=[HttpEntity.Strict application/json {"type":"text","extract": "text", "field2":"text2","duration": 451 }

should be

entity=[HttpEntity.Strict application/json {"type":"text","extract": "text", "field2":"text2","duration": 451 }]

Assuming that's a typo and the entity field really does end with ], I'll carry on with these instructions. If not, I think you'll need to fix the underlying log so it is formatted correctly and the bracket is closed.


Rather than skipping over most of the log and only parsing out the JSON bit, I decided to parse the whole log and show that the end result looks good. So the first thing we need to do is pull out the set of key/value pairs that follow the Request object:

Sample input: thread-191555 app.main - [cid: 2cacd6f9-546d-41ew-a7ce-d5d41b39eb8f, uid: e6ffc3b0-2f39-44f7-85b6-1abf5f9ad970] Request: protocol=[HTTP/1.0] method=[POST] path=[/metrics] headers=[Timeout-Access: <function1>, Remote-Address: 192.168.0.1:37936, Host: app:5000, Connection: close, X-Real-Ip: 192.168.1.1, X-Forwarded-For: 192.168.1.1, Authorization: ***, Accept: application/json, text/plain, */*, Referer: https://google.com, Accept-Language: cs-CZ, Accept-Encoding: gzip, deflate, User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko, Cache-Control: no-cache] entity=[HttpEntity.Strict application/json {"type":"text","extract": "text", "field2":"text2","duration": 451 }]

Grok parser rule: app_log thread-%{integer:thread} %{notSpace:file} - \[%{data::keyvalue(": ")}\] Request: %{data:request:keyvalue("=","","[]")}

Result:

{
  "thread": 191555,
  "file": "app.main",
  "cid": "2cacd6f9-546d-41ew-a7ce-d5d41b39eb8f",
  "uid": "e6ffc3b0-2f39-44f7-85b6-1abf5f9ad970",
  "request": {
    "protocol": "HTTP/1.0",
    "method": "POST",
    "path": "/metrics",
    "headers": "Timeout-Access: <function1>, Remote-Address: 192.168.0.1:37936, Host: app:5000, Connection: close, X-Real-Ip: 192.168.1.1, X-Forwarded-For: 192.168.1.1, Authorization: ***, Accept: application/json, text/plain, */*, Referer: https://google.com, Accept-Language: cs-CZ, Accept-Encoding: gzip, deflate, User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko, Cache-Control: no-cache",
    "entity": "HttpEntity.Strict application/json {\"type\":\"text\",\"extract\": \"text\", \"field2\":\"text2\",\"duration\": 451 }"
  }
}

Note how we use the keyvalue parser with [] as the quoting string, which lets us easily extract everything from the Request object.
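
If you want to sanity-check that kind of extraction outside of Datadog, here is a rough Python sketch. It is only an approximation of the keyvalue matcher (a regex that treats everything between key=[ and the matching ] as the value), not Datadog's implementation, and the shortened log string is just for illustration:

import re

# Shortened version of the sample log, just for illustration.
log = (
    'Request: protocol=[HTTP/1.0] method=[POST] path=[/metrics] '
    'headers=[Timeout-Access: <function1>, Host: app:5000, Connection: close] '
    'entity=[HttpEntity.Strict application/json {"extract": "text", "duration": 451 }]'
)

# key=[value] pairs: capture the value up to a "]" that is followed by
# either the next "key=[" or the end of the string.
pairs = re.findall(r'(\w+)=\[(.*?)\](?= \w+=\[|$)', log)
request = dict(pairs)

print(request['protocol'])  # HTTP/1.0
print(request['entity'])    # HttpEntity.Strict application/json {"extract": "text", "duration": 451 }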

Now the goal is to extract the details from the entity field inside the request object. With a Grok parser you can specify a particular attribute to parse further.

So, in the same pipeline, we'll add another grok parser processor after the first one, and in the Advanced options section configure it to run on request.entity, since that's the attribute we're talking about.

Sample input: HttpEntity.Strict application/json {"type":"text","extract": "text", "field2":"text2","duration": 451 }

Grok parser rule: entity_rule %{notSpace:request.entity.class} %{notSpace:request.entity.media_type} %{data:request.entity.json:json}

Result:

{
  "request": {
    "entity": {
      "class": "HttpEntity.Strict",
      "media_type": "application/json",
      "json": {
        "duration": 451,
        "extract": "text",
        "type": "text",
        "field2": "text2"
      }
    }
  }
}
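
Outside of Datadog, the same split-then-parse idea looks roughly like the Python sketch below (again just an approximation of what the grok rule does, not Datadog code); it also shows how to get back to the two fields the original question asked for:

import json

entity = ('HttpEntity.Strict application/json '
          '{"type":"text","extract": "text", "field2":"text2","duration": 451 }')

# Split off the class and media type (the two notSpace tokens),
# then parse the remainder as JSON (the json matcher).
cls, media_type, payload = entity.split(' ', 2)
body = json.loads(payload)

# The two fields the original question wanted to extract:
print({'extract': body['extract'], 'duration': body['duration']})
# {'extract': 'text', 'duration': 451}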

Now when we look at the final parsed log, it contains everything we need.


And just because it's really easy, I also added a third grok processor for the headers block (with the advanced settings set to parse from request.headers):

Sample input: Timeout-Access: <function1>, Remote-Address: 192.168.0.1:37936, Host: app:5000, Connection: close, X-Real-Ip: 192.168.1.1, X-Forwarded-For: 192.168.1.1, Authorization: ***, Accept: application/json, text/plain, */*, Referer: https://google.com, Accept-Language: cs-CZ, Accept-Encoding: gzip, deflate, User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko, Cache-Control: no-cache

Grok parser rule: headers_rule %{data:request.headers:keyvalue(": ", "/)(; :")}

Result:

{
  "request": {
    "headers": {
      "Timeout-Access": "function1",
      "Remote-Address": "192.168.0.1:37936",
      "Host": "app:5000",
      "Connection": "close",
      "X-Real-Ip": "192.168.1.1",
      "X-Forwarded-For": "192.168.1.1",
      "Accept": "application/json",
      "Referer": "https://google.com",
      "Accept-Language": "cs-CZ",
      "Accept-Encoding": "gzip",
      "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko",
      "Cache-Control": "no-cache"
    }
  }
}

The only tricky bit here is that I had to define a characterWhiteList of /)(; : in order to handle all of the special characters that show up in the User-Agent field.
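
As a cross-check outside of Datadog, here is a rough Python approximation of that header parsing. It is not how the keyvalue matcher works internally; the regex simply ends each value at the next ", Header-Name: " boundary, which is why it does not need a character whitelist:

import re

# Shortened header string, just for illustration.
headers_raw = (
    'Timeout-Access: <function1>, Remote-Address: 192.168.0.1:37936, Host: app:5000, '
    'Accept: application/json, text/plain, */*, '
    'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) like Gecko, Cache-Control: no-cache'
)

# "Header-Name: value" pairs; each value runs until the next ", Header-Name: " or the end.
pairs = re.findall(r'([\w-]+): (.*?)(?=, [\w-]+: |$)', headers_raw)
headers = dict(pairs)

print(headers['Host'])        # app:5000
print(headers['Accept'])      # application/json, text/plain, */*
print(headers['User-Agent'])  # Mozilla/5.0 (Windows NT 10.0; WOW64) like Gecko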


References:

Just the docs, plus some guess-and-check in my personal Datadog account.

https://docs.datadoghq.com/logs/processing/parsing/?tab=matcher#key-value-or-logfmt