Fluentd Kubernetes Nodejs : Error: connect ECONNREFUSED 127.0.0.1:24224

EDIT: I hardcoded the fluentd Service IP directly in my Express app and it works. How can I get it to work without hardcoding the IP?

I have several pods (nodejs + express server) running on a Kubernetes cluster.

I'd like to send logs from my nodejs pods to a Fluentd DaemonSet.

But I'm getting this error:

Fluentd error Error: connect ECONNREFUSED 127.0.0.1:24224

I'm using https://github.com/fluent/fluent-logger-node and my configuration is very simple:

const logger = require('fluent-logger')

logger.configure('pptr', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 600000
});

My fluentd config file:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Ignore fluent logs
<label @FLUENT_LOG>
  <match fluent.*>
    @type null
  </match>
</label>

<match pptr.**>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
  ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
  type_name fluentd
  logstash_format true
</match>

Here is the Fluentd DaemonSet config file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          ports:
            - containerPort: 24224
          env:
            - name:  FLUENT_ELASTICSEARCH_HOST
              value: "xxx"
            - name:  FLUENT_ELASTICSEARCH_PORT
              value: "xxx"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            # Option to configure elasticsearch plugin with self signed certs
            # ================================================================
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "true"
            # Option to configure elasticsearch plugin with tls
            # ================================================================
            - name: FLUENT_ELASTICSEARCH_SSL_VERSION
              value: "TLSv1_2"
            # X-Pack Authentication
            # =====================
            - name: FLUENT_ELASTICSEARCH_USER
              value: "xxx"
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              value: "xxx"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: config-volume
              mountPath: /fluentd/etc/kubernetes.conf
              subPath: kubernetes.conf
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-conf
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

I also tried deploying a Service that exposes port 24224:

apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    app: fluentd
spec:
  ports:
    - name: "24224"
      port: 24224
      targetPort: 24224
  selector:
    k8s-app: fluentd-logging
status:
  loadBalancer: {}

And finally, my Express app Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: puppet
  labels:
    app: puppet
spec:
  replicas: 5
  selector:
    matchLabels:
      app: puppet
  template:
    metadata:
      labels:
        app: puppet
    spec:
      containers:
        - name: puppet
          image: myrepo/my-image
          ports:
            - containerPort: 8080

Focusing on the following parts of the question:

I'd like to send logs from my nodejs pods to a Fluentd DaemonSet.

EDIT: I hardcoded the fluentd Service IP directly in my Express app and it works. How can I get it to work without hardcoding the IP?

It looks like the communication between the pods and the fluentd Service itself is fine (the hardcoded IP works). The issue here is how they address each other.

You can reach the fluentd Service by its name. For example (from inside a pod):

  • curl fluentd:24224

You can only reach a Service by its short name (like fluentd) from within the same namespace. If the Service lives in another namespace, you need to use its full DNS name. The template and an example are shown below, with a sketch of a matching logger configuration after the list:

  • Template: service-name.namespace.svc.cluster.local
  • Example: fluentd.kube-system.svc.cluster.local
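
Applied to the question's setup, a minimal sketch of the fluent-logger configuration (assuming the Express pods run in the default namespace while the fluentd Service lives in kube-system, as in the manifests above):

const logger = require('fluent-logger')

// Target the fluentd Service by its cluster DNS name instead of 'localhost',
// which points back at the Express pod itself and causes ECONNREFUSED.
logger.configure('pptr', {
   host: 'fluentd.kube-system.svc.cluster.local',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 600000
});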

You can also use a Service of type ExternalName to map the full DNS name of your Service to a shorter name, as shown below:


Assuming (as an example) that:

  • You have created an nginx-namespace namespace:
    • $ kubectl create namespace nginx-namespace
  • You have an nginx Deployment in nginx-namespace and a Service associated with it:
    • $ kubectl create deployment nginx --image=nginx --namespace=nginx-namespace
    • $ kubectl expose deployment nginx --port=80 --type=ClusterIP --namespace=nginx-namespace
  • You want to communicate with the nginx Deployment from another namespace (i.e. default)

You can communicate with the above pods in any of the following ways:

  • By the Pod's IP address
    • 10.98.132.201
  • By the (full) DNS name of the Service
    • nginx.nginx-namespace.svc.cluster.local
  • By a Service of type ExternalName pointing to the (full) DNS name of the Service
    • nginx-service

An example of a Service of type ExternalName:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default # <- the same as the pod communicating with the service
spec:
  type: ExternalName
  externalName: nginx.nginx-namespace.svc.cluster.local
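
Applied to the question's setup, a sketch of the same idea, assuming the Express pods run in the default namespace (the puppet Deployment above does not set one): an ExternalName Service there lets the app use the short host name fluentd even though the DaemonSet's Service sits in kube-system.

apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: default # <- assumed namespace of the Express pods
spec:
  type: ExternalName
  externalName: fluentd.kube-system.svc.cluster.local

With this in place, host: 'fluentd' with port 24224 in the fluent-logger configuration should resolve from the default namespace; note that an ExternalName Service only returns a CNAME, so the client still specifies the port itself.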

You can pass this information to your pod in a number of ways.
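
For example, one option (a sketch; the FLUENTD_HOST variable name is purely illustrative and not taken from the original manifests) is to inject the full DNS name as an environment variable in the puppet Deployment and read it in the app instead of hardcoding 'localhost':

apiVersion: apps/v1
kind: Deployment
metadata:
  name: puppet
  labels:
    app: puppet
spec:
  replicas: 5
  selector:
    matchLabels:
      app: puppet
  template:
    metadata:
      labels:
        app: puppet
    spec:
      containers:
        - name: puppet
          image: myrepo/my-image
          ports:
            - containerPort: 8080
          env:
            - name: FLUENTD_HOST # illustrative name
              value: "fluentd.kube-system.svc.cluster.local"

The logger configuration can then use host: process.env.FLUENTD_HOST || 'localhost'.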


Additional resources: