Why don't Bitnami Fluentd daemonsets generate logs to standard output?
I have deployed the Bitnami EFK Helm chart on a K8s cluster.
https://github.com/bitnami/charts/tree/master/bitnami/fluentd
All the pods are running fine, but Fluentd does not show any logs. I don't know whether something is missing from the configuration. The cluster is restricted, though I'm not sure whether that makes any difference: I deployed the same EFK stack with the same configuration on an unrestricted cluster and it worked fine.
kkot@ltp-str-00-0085:~/logging-int$ kk get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-elasticsearch-coordinating-only-5f5656cdd5-9d4lj 1/1 Running 0 6h34m
elasticsearch-elasticsearch-coordinating-only-5f5656cdd5-h6lbd 1/1 Running 0 6h34m
elasticsearch-elasticsearch-data-0 1/1 Running 0 6h34m
elasticsearch-elasticsearch-data-1 1/1 Running 0 6h34m
elasticsearch-elasticsearch-master-0 1/1 Running 0 6h34m
elasticsearch-elasticsearch-master-1 1/1 Running 0 6h34m
fluentd-0 1/1 Running 0 6h10m
fluentd-4glgs 1/1 Running 2 6h10m
fluentd-59tzz 1/1 Running 0 5h43m
fluentd-b8bc8 1/1 Running 2 6h10m
fluentd-qfdcs 1/1 Running 2 6h10m
fluentd-sf2hk 1/1 Running 2 6h10m
fluentd-trvwx 1/1 Running 0 95s
fluentd-tzqw8 1/1 Running 2 6h10m
kibana-656d55f94d-8qf8f 1/1 Running 0 6h28m
kkot@ltp-str-00-0085:~/logging-int$ kk logs fluentd-qfdcs
Error log:
2021-02-24 10:52:15 +0000 [warn]: #0 pattern not matched: "{\"log\":\"2021-02-24 10:52:13 +0000 [warn]: #0 pattern not matched: \"{\\"log\\":\\"
Has anyone run into the same issue? Thanks.
Could you share the configuration your forwarder is using?
The latest version of the chart (3.6.2) uses the following by default:
configMapFiles:
  fluentd.conf: |
    # Ignore fluentd own events
    <match fluent.**>
      @type null
    </match>

    @include fluentd-inputs.conf
    @include fluentd-output.conf
    {{- if .Values.metrics.enabled }}
    @include metrics.conf
    {{- end }}

  fluentd-inputs.conf: |
    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      port 9880
    </source>
    # Get the logs from the containers running in the node
    <source>
      @type tail
      path /var/log/containers/*.log
      # exclude Fluentd logs
      exclude_path /var/log/containers/*fluentd*.log
      pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
      </parse>
    </source>
    # enrich with kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
Based on the error log you shared:
2021-02-24 10:52:15 +0000 [warn]: #0 pattern not matched: "{\"log\":\"2021-02-24 10:52:13 +0000 [warn]: #0 pattern not matched: \"{\\"log\\":\\"
I notice two things:
- The Fluentd pods seem to be collecting their own logs, which should not happen because of:
  # exclude Fluentd logs
  exclude_path /var/log/containers/*fluentd*.log
- The JSON logs are not being parsed, even though the parser is configured:
  <parse>
    @type json
  </parse>
Maybe you omitted configMapFiles in your values.yaml?
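If the section really was dropped, putting the defaults back in an override file is usually enough. A minimal sketch, assuming the chart nests these files under forwarder.configMapFiles as in its default values.yaml (the file name custom-values.yaml and the release name fluentd below are just placeholders):

# custom-values.yaml -- only the forwarder inputs shown; keep the other defaults
forwarder:
  configMapFiles:
    fluentd-inputs.conf: |
      <source>
        @type http
        port 9880
      </source>
      <source>
        @type tail
        path /var/log/containers/*.log
        exclude_path /var/log/containers/*fluentd*.log
        pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
        tag kubernetes.*
        read_from_head true
        <parse>
          @type json
        </parse>
      </source>
      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>

and then install or upgrade with it:

helm upgrade --install fluentd bitnami/fluentd -f custom-values.yaml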
I am using version 3.6.2 of the fluentd chart, with the default configMapFiles for both the forwarder and the aggregator from values.yaml. After a lot of trial and error I can now see some logs, but unfortunately they are all timestamped 1970-01-01, and of course that is also the date I see for the logstash logs in Kibana.
kkot@ltp-str-00-0085:~/logging-int$ kubectl logs -l "app.kubernetes.io/component=aggregator"
1970-01-01 00:33:41.267229471 +0000 kubernetes.var.log.containers.payscan-56f94ddbcd-5fgqz_sgkb-r200-test_payscan-6a123dce39ade96b45ac156b69ea08f1bbc63382840782937dbeee8978d0f4dc.log: {"log": "\tat com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:194) ~[HikariCP-2.7.9.jar!/:?]\n","stream":"stdout","docker":{"container_id":"6a123dce39ade96b45ac156b69ea08f1bbc63382840782937dbeee8978d0f4dc"},"kubernetes":{"container_name":"payscan","namespace_name":"sgkb-r200-test")
If I exec into the Fluentd container, the date looks fine to me:
kkot@ltp-str-00-0085:~/logging-int$ kk exec -it fluentd-9kdp4 -- bash
root@fluentd-9kdp4:/opt/bitnami/fluentd# date
Mon Mar 1 07:20:20 UTC 2021
Is there a way to fix this weird date issue?
Update: solved. Adding a time_format to the JSON parser fixed the timestamps:
<parse>
  @type json
  time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
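For context, a sketch of how that fix slots into the tail source of the default fluentd-inputs.conf shown above (everything except the time_format line and the comments is taken from the chart defaults). The format string matches the RFC3339 nanosecond timestamps the Docker JSON log driver writes into the "time" field, which is presumably why the events were previously landing near the epoch:

<source>
  @type tail
  path /var/log/containers/*.log
  # exclude Fluentd logs
  exclude_path /var/log/containers/*fluentd*.log
  pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
    # tell the json parser how to read the container log's "time" field;
    # without an explicit format the ISO8601 string was not interpreted as a
    # timestamp, and events showed up with near-epoch (1970-01-01) times
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>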