Logstash parsing data from two different filebeat inputs
I have one machine on which I have set up Elasticsearch and Logstash, and I ship logs to it with Filebeat from another machine. I want to add a new machine from which I can also send logs to Logstash, parse them, and store them in the same Elasticsearch index.
I tried configuring Filebeat on the new machine with the same Logstash output, but Logstash does not seem to receive data from more than one source...
Logstash config file:
input {
  beats {
    port => 5044
  }
}
filter {
  grok { match => { "message" => "%{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \[%{NOTSPACE:referrer}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-)"} }
  grok { match => { "referrer" => "%{WORD:protocol}://%{WORD:domain1}.%{WORD:domain2}.%{WORD:domain3}:%{INT:port}" } }
  geoip { source => "clientip" }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my_index"
  }
}
Filebeat config file:
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log

  # Change to true to enable this input configuration.
  #enabled: false
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /var/log/hostname/proxy1/app/nginx.log
    - /var/log/hostname/proxy2/app/nginx.log
    #- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
#============================= Elastic Cloud ==================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["localhost:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5045"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
Any suggestions are appreciated!
You should edit the output section of filebeat.yml as follows:
output.logstash:
  # The Logstash hosts
  hosts: ["Logstash_server_private_ip:5044"]
Logstash expects data on port 5044, not on port 5045.
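For completeness: Logstash only accepts Beats connections on ports that a beats input declares. An alternative (a minimal sketch, not needed once both Filebeat machines point at 5044) would be to add a second beats input to the same pipeline:
input {
  beats {
    port => 5044
  }
  beats {
    port => 5045
  }
}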
Use Logstash pipelines.
Note: if X-Pack basic security is not enabled, Elasticsearch does not need a username and password (remove those lines).
In the directory /etc/logstash/conf.d you can write multiple conf files, each listening on a different port.
gunicorn.conf (for gunicorn.log):
input {
  beats {
    port => "5044"
  }
}
output {
  # stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["xx.xx.xx.xx:xxxx"]
    user => ""
    password => "*******"
    index => "gunicorn"
  }
}
access.conf (for access.log):
input {
  beats {
    port => "5047"
  }
}
output {
  # stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["xxxxxxx:xxxx"]
    user => "*********"
    password => "*********"
    index => "access"
  }
}
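As noted above, when X-Pack basic security is not enabled, the user and password lines can simply be dropped; each output then reduces to a sketch like this (placeholder host, index name as in the example above, with the rubydebug stdout kept as an optional debugging aid):
output {
  # stdout { codec => rubydebug }   # uncomment to print events while testing
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "access"
  }
}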
In the directory /etc/logstash, edit pipelines.yml:
- pipeline.id: gunicorn
  path.config: "/etc/logstash/conf.d/gunicorn.conf"
- pipeline.id: access
  path.config: "/etc/logstash/conf.d/access.conf"
On machine 1, in the directory /etc/filebeat:
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - "/home/ubuntu/data/gunicorn.log"

queue.mem:
  events: 8000
  flush.min_events: 2000
  flush.timeout: 10s

output.logstash:
  hosts: ["logstash public IP:5044"]
On machine 2, in the directory /etc/filebeat:
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - "/home/ubuntu/data/access.log"

queue.mem:
  events: 8000
  flush.min_events: 2000
  flush.timeout: 10s

output.logstash:
  hosts: ["logstash public IP:5047"]