How to declare multiple output.logstash sections in a single Filebeat DaemonSet in Kubernetes?
I have two applications (application1 and application2) running on a Kubernetes cluster. I want to collect their logs from outside the cluster and save them in separate directories (for example /var/log/application1/application1-YYYYMMDD.log and /var/log/application2/application2-YYYYMMDD.log).
So I deployed a Filebeat DaemonSet on the Kubernetes cluster to fetch the logs from my applications (application1, application2), and a Logstash service on the instance (outside the Kubernetes cluster) where I want the log files to be written.
I created two Filebeat configuration files (filebeat-application1.yml and filebeat-application2.yml) in a ConfigMap, and then passed both files as args to the DaemonSet container (docker.elastic.co/beats/filebeat:7.10.1) as shown below.
....
      - name: filebeat-application1
        image: docker.elastic.co/beats/filebeat:7.10.1
        args: [
          "-c", "/etc/filebeat-application1.yml",
          "-c", "/etc/filebeat-application2.yml",
          "-e",
        ]
.....
But only /etc/filebeat-application2.yml takes effect, so I only get logs from application2.
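This is the expected behaviour: when Filebeat is started with several `-c` flags, it merges the files in order, and for a single-valued section such as `output.logstash` the setting from the last file wins. A minimal sketch of that last-wins merging (this is an illustration only, not Filebeat's actual config loader):

```python
# Illustration only: mimics how a top-level section that appears in
# several config files passed with repeated -c flags is overridden,
# with the last file winning. Not Filebeat's real loader.
app1 = {"output.logstash": {"hosts": ["IP:5045"]}}  # filebeat-application1.yml
app2 = {"output.logstash": {"hosts": ["IP:5044"]}}  # filebeat-application2.yml

merged = {}
for cfg in (app1, app2):   # same order as the -c flags
    merged.update(cfg)     # a later section replaces an earlier one

print(merged["output.logstash"]["hosts"])  # ['IP:5044'] -- only application2's output survives
```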
Can you help me with how to feed two Filebeat configuration files into the docker.elastic.co/beats/filebeat DaemonSet? Or how to configure two `filebeat.autodiscover:` rules with two separate `output.logstash:` sections?
Below is my complete filebeat-kubernetes-whatsapp.yaml:
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-application1
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat-application1.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application1"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.pod.name}*.log
    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:
    output.logstash:
      hosts: ["IP:5045"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-application2
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat-application2.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application2"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.pod.name}*.log
    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:
    output.logstash:
      hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat-application1
          image: docker.elastic.co/beats/filebeat:7.10.1
          args: [
            "-c", "/etc/filebeat-application1.yml",
            "-c", "/etc/filebeat-application2.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config-application1
              mountPath: /etc/filebeat-application1.yml
              readOnly: true
              subPath: filebeat-application1.yml
            - name: config-application2
              mountPath: /etc/filebeat-application2.yml
              readOnly: true
              subPath: filebeat-application2.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: config-application1
          configMap:
            defaultMode: 0640
            name: filebeat-config-application1
        - name: config-application2
          configMap:
            defaultMode: 0640
            name: filebeat-config-application2
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
This is not possible: Filebeat supports only one output; only a single output may be defined.
You need to send the logs to the same Logstash instance and route them to different outputs based on some field.
For example, assuming the events sent to Logstash carry the field kubernetes.pod.name, you could use something like this:
output {
  if [kubernetes][pod][name] == "application1" {
    # your output for the application1 logs
  }
  if [kubernetes][pod][name] == "application2" {
    # your output for the application2 logs
  }
}
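One caveat with the sketch above (an observation added here, not part of the original answer): pods created by a Deployment get ReplicaSet and pod hash suffixes appended to their names, so an exact `==` comparison against the bare application name will typically never match; Logstash's `in` operator on a string field is a substring test and does match. In Python terms, with a hypothetical pod name:

```python
# Hypothetical pod name of the kind a Deployment's ReplicaSet generates.
pod_name = "application1-7d9c6f5b8-x2k4q"

# Exact comparison (like Logstash `==`) fails against the bare app name:
print(pod_name == "application1")   # False

# Substring test (like Logstash `in` on a string field) succeeds:
print("application1" in pod_name)   # True
```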
I found a way to solve my problem. It may not be the right way, but it meets my requirements.
filebeat-kubernetes-whatsapp.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application1"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log
            - condition:
                contains:
                  kubernetes.pod.name: "application2"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log
    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    output.logstash:
      hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.10.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
/etc/logstash/conf.d/config.conf
input {
  beats {
    port => 5044
  }
}

#filter {
#  ...
#}

output {
  if "application1" in [kubernetes][pod][name] {
    file {
      enable_metric => false
      gzip => false
      codec => line { format => "[%{[@timestamp]}] [%{[kubernetes][node][name]}/%{[kubernetes][pod][name]}/%{[kubernetes][pod][uid]}] [%{message}]" }
      path => "/abc/def/logs/application1%{+YYYY-MM-dd}.log"
    }
  }
  if "application2" in [kubernetes][pod][name] {
    file {
      enable_metric => false
      gzip => false
      codec => line { format => "[%{[@timestamp]}] [%{[kubernetes][node][name]}/%{[kubernetes][pod][name]}/%{[kubernetes][pod][uid]}] [%{message}]" }
      path => "/abc/def/logs/application2%{+YYYY-MM-dd}.log"
    }
  }
}
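Before restarting Logstash with this pipeline, its syntax can be checked with Logstash's standard config-test flag (the path shown is the one assumed above; this requires the Logstash binary on the host):

```shell
bin/logstash -f /etc/logstash/conf.d/config.conf --config.test_and_exit
```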