Running 2 separate ELK stacks on docker

On my AWS EC2 Linux server I am running an ELK stack setup in which Logstash transforms data from a PostgreSQL database and imports it into Elasticsearch. This setup currently serves my development environment. We have reached the point of creating a staging environment, so we will probably also need a separate ELK stack for staging, because we do not want to mix data from the two separate databases (dev and staging).

I have very little experience with ELK; I have looked into a few options but have not found a solution to this problem.

What I tried was creating another docker-compose file with different container names and ports. When I run docker-compose.elastic.dev.yml, it creates the first ELK stack as usual. Then I run docker-compose.elastic.stage.yml, but it starts to recreate the existing ELK containers. I have tried playing with the docker-compose settings, but no luck so far. Any suggestions?
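For reference, I start each stack roughly like this (the exact flags may vary, but the -f file selection is the relevant part):

docker-compose -f docker-compose.elastic.dev.yml up -d
docker-compose -f docker-compose.elastic.stage.yml up -d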

FYI, Kibana is not included in dev because we do not need it there.

docker-compose.elastic.stage.yml

version: '3.7'
services:
  elasticsearch-stage:
    container_name: elasticsearch-stage
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    ports:
      - 9400:9200
    environment:
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-methods=OPTIONS,HEAD,GET,POST,PUT,DELETE
      - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      - transport.host=127.0.0.1
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch_data_stage:/usr/share/elasticsearch/data
    networks:
      - api_network
  kibana-stage:
    container_name: kibana-stage
    image: docker.elastic.co/kibana/kibana:7.10.2
    ports:
      - 5601:5601
    networks:
      - api_network
    depends_on:
      - elasticsearch-stage
  logstash-stage:
    container_name: logstash-stage
    ports:
      - 5045:5045
    build:
      dockerfile: Dockerfile.logstash
      context: .
    environment:
      LOGSTASH_JDBC_URL: "jdbc:postgresql://serverip:15433/name"
      LOGSTASH_JDBC_USERNAME: "name"
      LOGSTASH_JDBC_PASSWORD: "password"
      LOGSTASH_ELASTICSEARCH_HOST: "http://elasticsearch-stage:9200"
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./offers_template.json:/usr/share/logstash/templates/offers_template.json
      - ./offers_query.sql:/usr/share/logstash/queries/offers_query.sql
    logging:
      driver: "json-file"
      options:
        max-size: "200m"
        max-file: "5"
    networks:
      - api_network
    depends_on:
      - elasticsearch-stage
      - kibana-stage
volumes:
  elasticsearch_data_stage:
networks:
  api_network:
    name: name_api_network_stage
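A side note on this file: the Kibana image defaults to http://elasticsearch:9200 as its Elasticsearch host, and there is no service named elasticsearch in this stack, so kibana-stage would presumably also need the host set explicitly, something like:

  kibana-stage:
    environment:
      ELASTICSEARCH_HOSTS: "http://elasticsearch-stage:9200"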

docker-compose.elastic.dev.yml

version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    ports:
      - 9200:9200
    environment:
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-methods=OPTIONS,HEAD,GET,POST,PUT,DELETE
      - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      - transport.host=127.0.0.1
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - api_network
  logstash:
    build:
      dockerfile: Dockerfile.logstash
      context: .
    environment:
      LOGSTASH_JDBC_URL: "jdbc:postgresql://serverip:15432/username"
      LOGSTASH_JDBC_USERNAME: "username"
      LOGSTASH_JDBC_PASSWORD: "password"
      LOGSTASH_ELASTICSEARCH_HOST: "http://elasticsearch:9200"
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./offers_template.json:/usr/share/logstash/templates/offers_template.json
      - ./offers_query.sql:/usr/share/logstash/queries/offers_query.sql
    logging:
      driver: "json-file"
      options:
        max-size: "200m"
        max-file: "5"
    networks:
      - api_network
    depends_on:
      - elasticsearch
volumes:
  elasticsearch_data:
networks:
  api_network:
    name: name_api_network

I also found this article, which seems to be a similar/same problem; unfortunately, the thread was closed without a confirmed solution.

logstash.conf

input {
    jdbc {
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/postgresql.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_connection_string => "${LOGSTASH_JDBC_URL}"
        jdbc_user => "${LOGSTASH_JDBC_USERNAME}"
        jdbc_password => "${LOGSTASH_JDBC_PASSWORD}"
        lowercase_column_names => false
        schedule => "* * * * *"
        statement_filepath => "/usr/share/logstash/queries/offers_query.sql"
    }
}
filter {
    json {
        source => "name"
        target => "name"
    }
    json {
        source => "description"
        target => "description"
    }
    ...
    ...
}
output {
    elasticsearch {
        hosts => ["${LOGSTASH_ELASTICSEARCH_HOST}"]
        index => "offers"
        document_id => "%{id}"
        manage_template => true
        template_name => "offers"
        template => "/usr/share/logstash/templates/offers_template.json"
        template_overwrite => true
    }
    stdout { codec => json_lines }
}

UPDATE: I found here that when not running the default Logstash configuration, I need to set XPACK_MONITORING_ENABLED: "false" in the Logstash environment. With that, the error about Logstash not being able to connect to Elasticsearch disappeared (the snippet after the output below shows where I put the flag), but Logstash still does not process the data from the database as usual. What happens now is that every few minutes the plain query text from offers_query.sql shows up in the Logstash logs. When I request elasticsearch_server_ip:9400 I get this output (so it should be running):

{
  "name" : "30ac276f0846",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "14mxQTP7S32o-rIrjYSsXw",
  "version" : {
    "number" : "7.10.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "747e1cc71def077253878a59143c1f785afa92b9",
    "build_date" : "2021-01-13T00:42:12.435326Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
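For reference, this is where the monitoring flag sits, a sketch based on the stage file above (only the relevant keys shown):

  logstash-stage:
    environment:
      XPACK_MONITORING_ENABLED: "false"
      LOGSTASH_ELASTICSEARCH_HOST: "http://elasticsearch-stage:9200"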

As far as I can tell, you still have the same service names in both files, and that is what confuses docker-compose up -d.

Your problem is service naming in the docker-compose files.

services:
  elasticsearch
  logstash

is the same in the dev and staging compose files. Since you are not running swarm, you will need the following: keep the docker-compose files in separate folders, so that docker-compose derives a different project name from each folder and can therefore create differently named containers and networks.
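A minimal alternative sketch that avoids moving files, assuming the classic docker-compose CLI: override the project name explicitly with -p (or the COMPOSE_PROJECT_NAME environment variable), so the two stacks no longer share a project:

docker-compose -p elk-dev -f docker-compose.elastic.dev.yml up -d
docker-compose -p elk-stage -f docker-compose.elastic.stage.yml up -d

With distinct project names, bringing one stack up no longer marks the other stack's containers for recreation.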

And yes, you cannot publish the same port on the host twice:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    ports:
      - 9200:9200

One of the Elasticsearch services should use 9400:9200 or something similar.
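Once both stacks are up on distinct host ports, a quick sanity check (assuming the port mappings shown above) is:

curl http://localhost:9200    # dev elasticsearch
curl http://localhost:9400    # staging elasticsearch

Each should answer with its own cluster info JSON, like the one in the update above.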