SYSLOG-NG: Sending the same log to two different indices in Elasticsearch
I am trying to send the same log flow to two different Elasticsearch indices, because each index is meant for users with different roles.
I am also using a file as a destination. Here is a sample of the logs:
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sonda filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] input/input.go:152 Run input
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sonda filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] log/input.go:191 Start next scan
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sensor filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] log/input.go:421 Check file for harvesting: /opt/zeek/logs/current/weird.log
When I use only one elasticsearch-http destination (either of the two), everything works fine, but when both destinations are enabled, syslog-ng fails to start and systemctl complains.
This is my /etc/syslog-ng/syslog-ng.conf file:
@version: 3.27
@include "scl.conf"
options {
    chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
    dns_cache(no); owner("root"); group("adm"); perm(0640);
    stats_freq(0); bad_hostname("^gconfd$");
};

source s_net {
    udp(
        ip(0.0.0.0)
        port(514)
        flags(no-parse)
    );
};

log {
    source(s_net);
    destination(d_es);
    destination(d_es_other_index); ######## comment this to avoid the error
    destination(d_file);
};

template t_demo_filetemplate {
    template("${ISODATE} ${HOST} ${MESSAGE}\n");
};

destination d_file {
    file("/tmp/test.log" template(t_demo_filetemplate));
};

destination d_es {
    elasticsearch-http(
        index("syslog-ng-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        type("")
        user("elastic")
        password("password")
        batch_lines(128)
        batch_timeout(10000)
        timeout(100)
        template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
        time-zone("UTC")
        tls(
            ca-file("/root/elastic_certs/elastic-ca.crt")
            cert-file("/root/elastic_certs/elastic.crt")
            key-file("/root/elastic_certs/elastic.key")
            peer-verify(no)
        )
    );
};

destination d_es_other_index {
    elasticsearch-http(
        index("otherindex-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        type("")
        user("elastic")
        password("password")
        batch_lines(128)
        batch_timeout(10000)
        timeout(100)
        template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
        time-zone("UTC")
        tls(
            ca-file("/root/elastic_certs/elastic-ca.crt")
            cert-file("/root/elastic_certs/elastic.crt")
            key-file("/root/elastic_certs/elastic.key")
            peer-verify(no)
        )
    );
};
This is the error I get when both elasticsearch destinations are enabled (journalctl -xe does not seem to show anything relevant):
# systemctl restart syslog-ng.service
Job for syslog-ng.service failed because the control process exited with error code.
See "systemctl status syslog-ng.service" and "journalctl -xe" for details.
And my syslog-ng version information:
$ syslog-ng --version
syslog-ng 3 (3.27.1)
Config version: 3.22
Installer-Version: 3.27.1
Revision: 3.27.1-3build1
Compile-Date: Jul 30 2020 17:56:17
Module-Directory: /usr/lib/syslog-ng/3.27
Module-Path: /usr/lib/syslog-ng/3.27
Include-Path: /usr/share/syslog-ng/include
Available-Modules: syslogformat,afsql,linux-kmsg-format,stardate,affile,dbparser,geoip2-plugin,afprog,kafka,graphite,riemann,tfgetent,json-plugin,cef,hook-commands,basicfuncs,disk-buffer,confgen,timestamp,http,afamqp,mod-python,tags-parser,pseudofile,system-source,afsocket,afsnmp,csvparser,afstomp,appmodel,cryptofuncs,examples,afmongodb,add-contextual-data,afsmtp,afuser,xml,map-value-pairs,kvformat,redis,secure-logging,sdjournal,pacctformat
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Systemd: on
Is there any way to send to both Elasticsearch indices at the same time?
You can check the exact error message in the journal, as systemctl suggests:
See "systemctl status syslog-ng.service" and "journalctl -xe" for details.
Alternatively, you can start syslog-ng in the foreground:
$ syslog-ng -F --stderr
You are probably experiencing a persist-name collision caused by the matching elasticsearch-http() URLs. Try adding the persist-name() option with two unique names, for example:
destination d_es {
    elasticsearch-http(
        index("syslog-ng-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        # ...
        persist-name("d_es")
    );
};

destination d_es_other_index {
    elasticsearch-http(
        index("otherindex-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        # ...
        persist-name("d_es_other_index")
    );
};
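After adding the options, it may help to validate the configuration before restarting the service and then confirm that both index patterns are being written; a rough sketch, reusing the host and credentials from your own config (the indices only appear once the first batch has been flushed):
# syslog-ng --syntax-only && systemctl restart syslog-ng.service
$ curl -k -u elastic "https://192.168.89.44:9200/_cat/indices/syslog-ng-*,otherindex-*"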