Minimal beats => logstash => elastic not receiving logs in elastic (docker)
TLDR
I'm trying to parse 2 log files with a dockerized Elastic stack. The stack appends log lines to files in /usr/share/filebeat/scrape_logs and stores them in Elasticsearch via Logstash.
I can see the logs arriving in Logstash (they are printed as shown below), but when I run the query GET /_cat/indices/ from Kibana, there are no indices.
I've created a GitHub repo here with the relevant setup. If you want to run the code, just run docker-compose up, then echo '2021-03-15 09:58:59,255 [INFO] - i am a test' >> beat_test/log1.log to add extra log lines.
Why don't I see an index created in Elasticsearch? Why aren't the logs being indexed?
Details
logstash | {
logstash | "host" => {
logstash | "name" => "b5bd03c1654c"
logstash | },
logstash | "@timestamp" => 2021-03-15T22:09:06.220Z,
logstash | "log" => {
logstash | "file" => {
logstash | "path" => "/usr/share/filebeat/scrape_logs/log1.log"
logstash | },
logstash | "offset" => 98
logstash | },
logstash | "input" => {
logstash | "type" => "log"
logstash | },
logstash | "tags" => [
logstash | [0] "beats_input_codec_plain_applied"
logstash | ],
logstash | "ecs" => {
logstash | "version" => "1.6.0"
logstash | },
logstash | "@version" => "1",
logstash | "agent" => {
logstash | "name" => "b5bd03c1654c",
logstash | "type" => "filebeat",
logstash | "ephemeral_id" => "e171b269-2364-47ff-bc87-3fe0bd73bf8c",
logstash | "version" => "7.11.2",
logstash | "hostname" => "b5bd03c1654c",
logstash | "id" => "97aaac06-c87f-446f-aadc-8187b155e9e9"
logstash | },
logstash | "message" => "2021-03-15 09:58:59,255 [INFO] - i am a test"
logstash | }
docker-compose.yml
version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:7.11.1
    container_name: elasticsearch
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - discovery.type=single-node
    ports: ['9200:9200']
    volumes:
      - ./es_data:/usr/share/elasticsearch/data
  kibana:
    image: kibana:7.11.1
    container_name: kibana
    ports: ['5601:5601']
    depends_on: ['elasticsearch']
  logstash:
    image: logstash:7.11.1
    container_name: logstash
    volumes:
      - ./scrape_logs.conf:/usr/share/logstash/config/scrape_logs.conf
    depends_on: ['elasticsearch']
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.11.2
    container_name: filebeat
    user: root
    command: --strict.perms=false -e
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - ./beat_test:/usr/share/filebeat/scrape_logs
    depends_on: ['elasticsearch', 'kibana']
volumes:
  es_data:
scrape_logs.conf
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    host => "elasticsearch:9200"
    index => "scrape_test"
  }
}
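The filebeat.yml mounted in the compose file is not shown in the question; a minimal sketch consistent with this setup would look roughly like the following. The input path and the logstash:5044 host are assumptions taken from the compose file and the Logstash beats input above, not from the actual repo.

```yaml
# Hypothetical minimal filebeat.yml for this setup (actual file not shown
# in the question; path and host are inferred from docker-compose.yml).
filebeat.inputs:
  - type: log
    paths:
      - /usr/share/filebeat/scrape_logs/*.log

output.logstash:
  hosts: ["logstash:5044"]
```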
The problem is that you need to map the Logstash pipeline configuration into the /usr/share/logstash/pipeline folder; the /usr/share/logstash/config folder is only for settings.
If you don't provide one, the default /usr/share/logstash/pipeline/logstash.conf pipeline basically does the following, which is why you see the events in the Logstash console logs:
input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
So you need to replace the default pipeline by modifying the Logstash service configuration to the following:
logstash:
  image: logstash:7.11.1
  container_name: logstash
  volumes:
    - ./pipeline:/usr/share/logstash/pipeline
  depends_on: ['elasticsearch']
You also need to create a folder called pipeline and move the scrape_logs.conf file into it.
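From the directory containing docker-compose.yml, that move is just:

```shell
# Create the pipeline folder next to docker-compose.yml and move the
# Logstash pipeline file into it so the bind mount picks it up.
mkdir -p pipeline
mv scrape_logs.conf pipeline/scrape_logs.conf
```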
Finally, there is a typo in the scrape_logs.conf file: the host setting in the elasticsearch output should be called hosts:
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "scrape_test"
  }
}
Once all that is done, restart your docker stack, go into Kibana, and you will see your logs.
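One way to verify the fix from the shell, assuming the service names and port mappings from the docker-compose.yml above (the scrape_test index name comes from the pipeline config):

```shell
# Recreate the Logstash container so it picks up the new pipeline mount
docker-compose up -d --force-recreate logstash

# Append a test line that Filebeat will ship
echo '2021-03-15 09:58:59,255 [INFO] - i am a test' >> beat_test/log1.log

# After a few seconds, the scrape_test index should appear
curl -s 'http://localhost:9200/_cat/indices/scrape_test?v'
```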