docker-compose basic Prometheus / Grafana example with one node exporter
Question: how do I configure the Prometheus server to pull data from the node exporter?
I have successfully set up the data source in Grafana and can see the default dashboard with the following docker-compose.yml. The three services are:
- Prometheus server
- node exporter
- Grafana
docker-compose.yml:
version: '2'
services:
  prometheus_srv:
    image: prom/prometheus
    container_name: prometheus_server
    hostname: prometheus_server
  prometheus_node:
    image: prom/node-exporter
    container_name: prom_node_exporter
    hostname: prom_node_exporter
    depends_on:
      - prometheus_srv
  grafana:
    image: grafana/grafana
    container_name: grafana_server
    hostname: grafana_server
    depends_on:
      - prometheus_srv
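As written, prometheus_srv runs with the image's stock config, which only scrapes Prometheus itself. One way to feed it the scrape config shown further down is to mount a custom prometheus.yml over the default and publish the UI port. A minimal sketch, assuming prometheus.yml sits next to the compose file (the ./prometheus.yml path is my assumption):

version: '2'
services:
  prometheus_srv:
    image: prom/prometheus
    container_name: prometheus_server
    hostname: prometheus_server
    ports:
      - '9090:9090'   # publish the Prometheus UI on the host
    volumes:
      # assumed layout: ./prometheus.yml next to docker-compose.yml;
      # this mounts it over the image's default config path
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

With that in place, http://localhost:9090/targets should list both scrape jobs once the config below is loaded.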
Edit:
I used something similar to what @Daniel Lee shared, and it seems to work:
# my global config
global:
  scrape_interval: 10s     # Scrape targets every 10 seconds.
  evaluation_interval: 10s # Evaluate rules every 10 seconds.

scrape_configs:
  # Scrape Prometheus itself
  - job_name: 'prometheus'
    scrape_interval: 10s
    scrape_timeout: 10s
    static_configs:
      - targets: ['localhost:9090']

  # Scrape the node exporter
  - job_name: 'node'
    scrape_interval: 10s
    static_configs:
      - targets: ['prom_node_exporter:9100']
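This works because Compose attaches all three services to a shared default network, where Docker's embedded DNS resolves container and service names. The target could equally use the compose service name instead of the container_name; a sketch of the same job, assuming the docker-compose.yml above:

  # Scrape the node exporter, addressed by its compose service name
  - job_name: 'node'
    scrape_interval: 10s
    static_configs:
      # 'prometheus_node' is the service name from the docker-compose.yml;
      # it resolves the same way 'prom_node_exporter' (the container_name) does
      - targets: ['prometheus_node:9100']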
In the YAML configuration file; here is an example from the Grafana test instance of Prometheus.
Dockerfile:
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
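If you prefer baking the config into an image rather than the volume mount sketched earlier, the compose file can build this Dockerfile in place. A minimal sketch, assuming the Dockerfile and prometheus.yml live in the project root:

version: '2'
services:
  prometheus_srv:
    build: .   # builds the Dockerfile above, copying prometheus.yml into the image
    container_name: prometheus_server
    hostname: prometheus_server
    ports:
      - '9090:9090'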
YAML file:
# my global config
global:
  scrape_interval: 10s     # Scrape targets every 10 seconds.
  evaluation_interval: 10s # Evaluate rules every 10 seconds.
  # scrape_timeout is set to the global default (10s).

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 10 seconds.
    scrape_interval: 10s
    scrape_timeout: 10s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      #- targets: ['localhost:9090', '172.17.0.1:9091', '172.17.0.1:9100', '172.17.0.1:9150']
      - targets: ['localhost:9090', '127.0.0.1:9091', '127.0.0.1:9100', '127.0.0.1:9150']
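One caveat with these targets: inside the Prometheus container, localhost and 127.0.0.1 refer to the container itself, so '127.0.0.1:9100' will not reach a node exporter running in a sibling container. On the compose network it is safer to address the other containers by name; a sketch matching the services above:

    static_configs:
      # 'localhost:9090' still works because Prometheus scrapes itself;
      # 'prom_node_exporter' is the container_name from the compose file
      # and resolves via Docker DNS on the shared network
      - targets: ['localhost:9090', 'prom_node_exporter:9100']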