How to collect Docker logs using Filebeat?
I am trying to collect logs like these from a Docker container:
[1620579277][642e7adc-74e1-4b89-a705-d271846f7ebc][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set] ex02 set
[1620579277][ac9f99b7-0126-45ed-8a74-6adc3a9d6bc5][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction] Aval
=201 Bval =301 after performing the transaction
[1620579277][9211a9d4-3fe6-49db-b245-91ddd3a11cd3][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction]
Transaction makes payment of X units from A to B
[1620579280][0391d2ce-06c1-481b-9140-e143067a9c2d][channel1]
[1f5752224da4481e1dc4d23dec0938fd65f6ae7b989aaa26daa6b2aeea370084][usecase_cc][get] Query Response:
{"Name":"a","Amount":"200"}
I set up filebeat.yml like this:
filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/container-id/container-id.log'

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- dissect:
    tokenizer: '{"log":"[%{time}][%{uuid}][%{channel}][%{id}][%{chaincode}][%{method}] %{specificinfo}\"\n%{}'
    field: "message"
    target_prefix: ""

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "changeme"
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false
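For reference, the split that the dissect tokenizer above is meant to perform on one of the sample lines can be sketched with an ordinary regex. This is not Filebeat code, only an illustration of the six bracketed fields (time, uuid, channel, id, chaincode, method) and the free-form specificinfo tail:

```python
import re

# Stand-in for the dissect tokenizer: six bracketed fields, then the tail.
pattern = re.compile(
    r'\[(?P<time>[^\]]+)\]'
    r'\[(?P<uuid>[^\]]+)\]'
    r'\[(?P<channel>[^\]]+)\]'
    r'\[(?P<id>[^\]]+)\]'
    r'\[(?P<chaincode>[^\]]+)\]'
    r'\[(?P<method>[^\]]+)\] '
    r'(?P<specificinfo>.*)'
)

# One of the sample lines from above (joined back onto a single line).
line = ('[1620579277][642e7adc-74e1-4b89-a705-d271846f7ebc][channel1]'
        '[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4]'
        '[usecase_cc][set] ex02 set')

fields = pattern.match(line).groupdict()
print(fields['channel'])       # channel1
print(fields['method'])        # set
print(fields['specificinfo'])  # ex02 set
```

Note that Filebeat applies the tokenizer to the whole JSON-wrapped Docker log line (hence the `{"log":"` prefix and the trailing `\"\n%{}` in the config), whereas this sketch only covers the inner message.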
Although elasticsearch and kibana were deployed successfully, I get this error whenever new logs are generated:
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index [filebeat]",
        "resource.type": "index_or_alias",
        "resource.id": "filebeat",
        "index_uuid": "_na_",
        "index": "filebeat"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index [filebeat]",
    "resource.type": "index_or_alias",
    "resource.id": "filebeat",
    "index_uuid": "_na_",
    "index": "filebeat"
  },
  "status": 404
}
Note: I am using version 7.12.1, and Kibana, Elasticsearch, and Logstash are deployed in Docker.
I ended up using Logstash as an alternative to Filebeat. However, the original mistake was that the path for picking up the logs was mapped incorrectly in the Filebeat configuration file. To fix this:
- I created an environment variable pointing at the correct location:
- I passed that environment variable as part of a Docker volume:
- I pointed the paths in the configuration file at the volume's path inside the container:
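Putting the three steps above together, a minimal sketch could look like the following. The variable name FILEBEAT_LOGS_PATH, the mount point /usr/share/dockerlogs, and the compose layout are assumptions for illustration, since the original snippets are not shown:

```yaml
# .env (assumed): environment variable pointing at the real host location of the logs
#   FILEBEAT_LOGS_PATH=/var/lib/docker/containers

# docker-compose.yml fragment (hypothetical service definition)
filebeat:
  image: docker.elastic.co/beats/filebeat:7.12.1
  volumes:
    # the environment variable is passed as part of the Docker volume mapping
    - ${FILEBEAT_LOGS_PATH}:/usr/share/dockerlogs:ro
    - /var/run/docker.sock:/var/run/docker.sock

# filebeat.yml then points at the volume's path inside the container,
# not at the host path:
#   filebeat.inputs:
#   - type: container
#     paths:
#       - '/usr/share/dockerlogs/*/*.log'
```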