Use logs severity with Google Compute Engine and the Cloud Logging agent

I want to use log severity with the Google Cloud Logging agent running on a Linux (Debian) VM on Compute Engine.

The Compute Engine instance is a debian-9 n2-standard-4 machine.

I installed the Cloud Logging agent following the documentation:

$ curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
$ sudo bash add-logging-agent-repo.sh
$ sudo apt-get install google-fluentd
$ sudo apt-get install -y google-fluentd-catch-all-config-structured
$ sudo service google-fluentd start

And according to this paragraph, we can use log severity if the log line is a serialized JSON object and the option detect_json is set to true.

So I logged something like the following, but unfortunately I don't get any severity in GCP:

$ logger '{"severity":"ERROR","message":"This is an error"}'

But I was expecting something like this:

I don't mind whether the log entry type is textPayload or jsonPayload.

The file /etc/google-fluentd/google-fluentd.conf has detect_json enabled:

$ cat /etc/google-fluentd/google-fluentd.conf 
# Master configuration file for google-fluentd

# Include any configuration files in the config.d directory.
#
# An example "catch-all" configuration can be found at
# https://github.com/GoogleCloudPlatform/fluentd-catch-all-config
@include config.d/*.conf

# Prometheus monitoring.
<source>
  @type prometheus
  port 24231
</source>
<source>
  @type prometheus_monitor
</source>

# Do not collect fluentd's own logs to avoid infinite loops.
<match fluent.**>
  @type null
</match>

# Add a unique insertId to each log entry that doesn't already have it.
# This helps guarantee the order and prevent log duplication.
<filter **>
  @type add_insert_ids
</filter>

# Configure all sources to output to Google Cloud Logging
<match **>
  @type google_cloud
  buffer_type file
  buffer_path /var/log/google-fluentd/buffers
  # Set the chunk limit conservatively to avoid exceeding the recommended
  # chunk size of 5MB per write request.
  buffer_chunk_limit 512KB
  # Flush logs every 5 seconds, even if the buffer is not full.
  flush_interval 5s
  # Enforce some limit on the number of retries.
  disable_retry_limit false
  # After 3 retries, a given chunk will be discarded.
  retry_limit 3
  # Wait 10 seconds before the first retry. The wait interval will be doubled on
  # each following retry (20s, 40s...) until it hits the retry limit.
  retry_wait 10
  # Never wait longer than 5 minutes between retries. If the wait interval
  # reaches this limit, the exponentiation stops.
  # Given the default config, this limit should never be reached, but if
  # retry_limit and retry_wait are customized, this limit might take effect.
  max_retry_wait 300
  # Use multiple threads for processing.
  num_threads 8
  # Use the gRPC transport.
  use_grpc true
  # If a request is a mix of valid log entries and invalid ones, ingest the
  # valid ones and drop the invalid ones instead of dropping everything.
  partial_success true
  # Enable monitoring via Prometheus integration.
  enable_monitoring true
  monitoring_type opencensus
  detect_json true
</match>

The file /etc/google-fluentd/config.d/syslog.conf:

$ cat /etc/google-fluentd/config.d/syslog.conf
<source>
  @type tail

  # Parse the timestamp, but still collect the entire line as 'message'
  format syslog

  path /var/log/syslog
  pos_file /var/lib/google-fluentd/pos/syslog.pos
  read_from_head true
  tag syslog
</source>

What am I missing?

Note: I'm aware of the gcloud workaround, but it's not ideal because it logs everything under the resource type 'Global' instead of under my VM.
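
For reference, that workaround is roughly the following (my-test-log is just an example log name). The entry does get the right severity, but it is attached to the 'Global' resource rather than to my gce_instance:

$ gcloud logging write my-test-log '{"message":"This is an error"}' \
    --payload-type=json --severity=ERROR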

logger writes to syslog, and as stated in /etc/google-fluentd/config.d/syslog.conf, the syslog source will "parse the timestamp, but still collect the entire line as 'message'".
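
You can check what the agent actually reads by looking at the raw line in /var/log/syslog. Roughly, logger wraps the JSON in a standard syslog line (the timestamp, hostname and tag below are only illustrative), so what gets shipped is not a bare JSON object:

$ tail -n 1 /var/log/syslog
Mar  2 14:33:19 my-instance my-user: {"severity":"ERROR","message":"This is an error"}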

In your case, you can get log severity from JSON-formatted logs by streaming structured logs via structured-log files.

Here is the result of:

echo '{"severity":"ERROR","message":"This is an error"}' >> /tmp/test-structured-log.log