Kubernetes daemonset not running at all in AWS EKS 1.21
I'm using AWS EKS 1.21 with Fargate (serverless). I'm trying to run Fluentd as a DaemonSet, but the DaemonSet isn't running at all.
All the other objects, such as the Role, RoleBinding, ServiceAccount, and ConfigMap, are already in place in the cluster.
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
aws-node 0 0 0 0 0 <none> 8d
fluentd-cloudwatch 0 0 0 0 0 <none> 3m36s
kube-proxy 0 0 0 0 0 <none> 8d
Here is my DaemonSet:
apiVersion: apps/v1 # Latest supported by AWS EKS 1.21
kind: DaemonSet
metadata:
  labels:
    k8s-app: fluentd-cloudwatch
  name: fluentd-cloudwatch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-cloudwatch
  template:
    metadata:
      labels:
        k8s-app: fluentd-cloudwatch
    spec:
      containers:
      - env:
        - name: REGION
          value: us-east-1 # Verify the AWS region before applying this DaemonSet
        - name: CLUSTER_NAME
          value: eks-fargate-alb-demo # Verify the EKS cluster name before applying this DaemonSet
        image: fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch
        imagePullPolicy: IfNotPresent
        name: fluentd-cloudwatch
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /config-volume
          name: config-volume
        - mountPath: /fluentd/etc
          name: fluentdconf
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /run/log/journal
          name: runlogjournal
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - cp /config-volume/..data/* /fluentd/etc
        image: busybox
        imagePullPolicy: Always
        name: copy-fluentd-config
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /config-volume
          name: config-volume
        - mountPath: /fluentd/etc
          name: fluentdconf
      serviceAccount: fluentd
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: fluentd-config
        name: config-volume
      - emptyDir: {}
        name: fluentdconf
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /run/log/journal
          type: ""
        name: runlogjournal
I also don't see any events when I describe it. I can run other pods such as Nginx on this cluster, but this one isn't running at all.
kubectl describe ds fluentd-cloudwatch -n kube-system
Name:           fluentd-cloudwatch
Selector:       k8s-app=fluentd-cloudwatch
Node-Selector:  <none>
Labels:         k8s-app=fluentd-cloudwatch
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=fluentd-cloudwatch
  Service Account:  fluentd
  Init Containers:
   copy-fluentd-config:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      cp /config-volume/..data/* /fluentd/etc
    Environment:  <none>
    Mounts:
      /config-volume from config-volume (rw)
      /fluentd/etc from fluentdconf (rw)
  Containers:
   fluentd-cloudwatch:
    Image:      fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch
    Port:       <none>
    Host Port:  <none>
    Limits:
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  200Mi
    Environment:
      REGION:        us-east-1
      CLUSTER_NAME:  eks-fargate-alb-demo
    Mounts:
      /config-volume from config-volume (rw)
      /fluentd/etc from fluentdconf (rw)
      /run/log/journal from runlogjournal (ro)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
  Volumes:
   config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      fluentd-config
    Optional:  false
   fluentdconf:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
   varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:
   runlogjournal:
    Type:          HostPath (bare host directory volume)
    Path:          /run/log/journal
    HostPathType:
Events:  <none>
And the ConfigMap:
apiVersion: v1
data:
  containers.conf: |
    <source>
      @type tail
      @id in_tail_container_logs
      @label @containers
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag *
      read_from_head true
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
    <label @containers>
      <filter **>
        @type kubernetes_metadata
        @id filter_kube_metadata
      </filter>
      <filter **>
        @type record_transformer
        @id filter_containers_stream_transformer
        <record>
          stream_name ${tag_parts[3]}
        </record>
      </filter>
      <match **>
        @type cloudwatch_logs
        @id out_cloudwatch_logs_containers
        region "#{ENV.fetch('REGION')}"
        log_group_name "/k8s-nest/#{ENV.fetch('CLUSTER_NAME')}/containers"
        log_stream_name_key stream_name
        remove_log_stream_name_key true
        auto_create_stream true
        <buffer>
          flush_interval 5
          chunk_limit_size 2m
          queued_chunks_limit_size 32
          retry_forever true
        </buffer>
      </match>
    </label>
  fluent.conf: |
    @include containers.conf
    @include systemd.conf
    <match fluent.**>
      @type null
    </match>
  systemd.conf: |
    <source>
      @type systemd
      @id in_systemd_kubelet
      @label @systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <entry>
        field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
        field_map_strict true
      </entry>
      path /run/log/journal
      pos_file /var/log/fluentd-journald-kubelet.pos
      read_from_head true
      tag kubelet.service
    </source>
    <source>
      @type systemd
      @id in_systemd_kubeproxy
      @label @systemd
      filters [{ "_SYSTEMD_UNIT": "kubeproxy.service" }]
      <entry>
        field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
        field_map_strict true
      </entry>
      path /run/log/journal
      pos_file /var/log/fluentd-journald-kubeproxy.pos
      read_from_head true
      tag kubeproxy.service
    </source>
    <source>
      @type systemd
      @id in_systemd_docker
      @label @systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      <entry>
        field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
        field_map_strict true
      </entry>
      path /run/log/journal
      pos_file /var/log/fluentd-journald-docker.pos
      read_from_head true
      tag docker.service
    </source>
    <label @systemd>
      <filter **>
        @type record_transformer
        @id filter_systemd_stream_transformer
        <record>
          stream_name ${tag}-${record["hostname"]}
        </record>
      </filter>
      <match **>
        @type cloudwatch_logs
        @id out_cloudwatch_logs_systemd
        region "#{ENV.fetch('REGION')}"
        log_group_name "/k8s-nest/#{ENV.fetch('CLUSTER_NAME')}/systemd"
        log_stream_name_key stream_name
        auto_create_stream true
        remove_log_stream_name_key true
        <buffer>
          flush_interval 5
          chunk_limit_size 2m
          queued_chunks_limit_size 32
          retry_forever true
        </buffer>
      </match>
    </label>
kind: ConfigMap
metadata:
  labels:
    k8s-app: fluentd-cloudwatch
  name: fluentd-config
  namespace: kube-system
Please tell me where the problem is. Thanks.
After some research I found that Fargate on AWS does not yet support the Kubernetes DaemonSet object. That leaves these options:
A) Run Fluentd as a sidecar alongside the other containers in the pod
B) Switch the cluster from Fargate to node-group based
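For option A, a minimal sketch of what the sidecar pattern could look like (the pod name, app container, and shared log path below are illustrative placeholders, not my actual manifests): the application writes its logs to a shared emptyDir volume, and a Fluentd container in the same pod reads and ships them.

```yaml
# Hypothetical sidecar sketch: app and Fluentd share an emptyDir for logs.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fluentd   # placeholder name
spec:
  containers:
  - name: app
    image: nginx           # stands in for the real workload
    volumeMounts:
    - name: applogs
      mountPath: /var/log/nginx   # the app's log directory
  - name: fluentd-sidecar
    image: fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch
    env:
    - name: REGION
      value: us-east-1
    - name: CLUSTER_NAME
      value: eks-fargate-alb-demo
    volumeMounts:
    - name: applogs
      mountPath: /var/log/app     # Fluentd tails logs from here
      readOnly: true
  volumes:
  - name: applogs
    emptyDir: {}
```

The Fluentd config would then need a tail source pointed at the shared mount path instead of the hostPath directories used by the DaemonSet version.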
As you suspected, EKS/Fargate doesn't support DaemonSets (because there are no [real] nodes). In practice, though, you don't need to run Fluent Bit as a sidecar in every pod. EKS/Fargate supports a logging feature called FireLens, which lets you configure only where you want to log (the destination), and Fargate provisions a hidden sidecar (invisible to the user) behind the scenes to do it. See this page of the documentation for details.
Snippet:
Amazon EKS on Fargate offers a built-in log router based on Fluent Bit. This means that you don't explicitly run a Fluent Bit container as a sidecar, but Amazon runs it for you. All that you have to do is configure the log router. The configuration happens through a dedicated ConfigMap....
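As a hedged example of that dedicated ConfigMap: at the time of writing, the AWS docs use a `aws-observability` namespace and a ConfigMap named `aws-logging`, with the CloudWatch destination expressed in Fluent Bit `[OUTPUT]` syntax (verify the exact keys against the current documentation; the log group name below just mirrors this question's naming).

```yaml
# Namespace must be labeled to enable Fargate log routing (per AWS docs).
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
# The dedicated ConfigMap the built-in Fluent Bit log router reads.
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name /k8s-nest/eks-fargate-alb-demo/fargate
        log_stream_prefix fluent-bit-
        auto_create_group true
```

Note that the Fargate pod execution role also needs IAM permissions to write to CloudWatch Logs for this to work.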