How to correctly use OpenTelemetry exporter with OpenTelemetry collector in client and server?
I'm trying to get the OpenTelemetry exporter working with the OpenTelemetry collector.
I found this OpenTelemetry collector demo.
So I copied these four config files
- docker-compose.yml (in my app I removed the generators part and Prometheus, which I currently have trouble running)
- otel-agent-config.yaml
- otel-collector-config.yaml
- .env
into my app.
Also, based on these two demos in the open-telemetry/opentelemetry-js repo,
I came up with my version (sorry it's a bit long; the lack of documentation made it hard to put together a minimal working version):
.env
OTELCOL_IMG=otel/opentelemetry-collector-dev:latest
OTELCOL_ARGS=
docker-compose.yml
version: '3.7'
services:
  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  # Collector
  otel-collector:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "55678"       # OpenCensus receiver
      - "55680:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one
  # Agent
  otel-agent:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-agent-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-agent-config.yaml:/etc/otel-agent-config.yaml
    ports:
      - "1777:1777"   # pprof extension
      - "8887:8888"   # Prometheus metrics exposed by the agent
      - "14268"       # Jaeger receiver
      - "55678"       # OpenCensus receiver
      - "55679:55679" # zpages extension
      - "13133"       # health_check
    depends_on:
      - otel-collector
otel-agent-config.yaml
receivers:
  opencensus:
  zipkin:
    endpoint: :9411
  jaeger:
    protocols:
      thrift_http:
exporters:
  opencensus:
    endpoint: "otel-collector:55678"
    insecure: true
  logging:
    loglevel: debug
processors:
  batch:
  queued_retry:
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [opencensus, jaeger, zipkin]
      processors: [batch, queued_retry]
      exporters: [opencensus, logging]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging, opencensus]
otel-collector-config.yaml
receivers:
  opencensus:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1
  logging:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
processors:
  batch:
  queued_retry:
extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [batch, queued_retry]
      exporters: [logging, zipkin, jaeger]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging]
After running docker-compose up -d, I can open the Jaeger UI (http://localhost:16686) and the Zipkin UI (http://localhost:9411).
My ConsoleSpanExporter works in both the web client and the Express.js server.
However, when I try the OpenTelemetry exporter code below in both the client and the server, I still can't connect to the OpenTelemetry collector.
Please see my comments about the URLs in the code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
// ...
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
      // url: 'http://localhost:55680/v1/trace', // Returns a 404 error.
      // url: 'http://localhost:55681/v1/trace', // No response; does not exist.
      // url: 'http://localhost:14268/v1/trace', // No response; does not exist.
    })
  )
);
Any ideas? Thanks!
The demo you tried uses an older configuration with opencensus, which should be replaced with the otlp receiver. That said, here is a working example:
https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node/docker
So I'm copying the files from there:
docker-compose.yaml
version: "3"
services:
# Collector
collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
volumes:
- ./collector-config.yaml:/conf/collector-config.yaml
ports:
- "9464:9464"
- "55680:55680"
- "55681:55681"
depends_on:
- zipkin-all-in-one
# Zipkin
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
# Prometheus
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"
processors:
  batch:
  queued_retry:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.
scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
This should work with opentelemetry-js version 0.10.2.
The default port for traces is 55680 and for metrics it is 55681.
The link I posted earlier - you can always find the latest working example there:
https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node
For a web example you can use the same docker setup and see all the working examples here:
https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/tracer-web/
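For reference, a minimal back-end (Node.js / Express) setup that sends spans to the collector above could look roughly like this. This is only a sketch assuming the 0.10.x packages @opentelemetry/node and @opentelemetry/tracing; leaving out the url option lets the exporter fall back to its default collector endpoint (port 55680 for traces, as noted above):

import { NodeTracerProvider } from '@opentelemetry/node';
import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

const provider = new NodeTracerProvider();

// Keep the console exporter around for local debugging.
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));

// No explicit `url`: the exporter uses its default collector endpoint,
// which matches the ports exposed in the docker-compose file above.
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({ serviceName: 'my-service' })
  )
);

provider.register(); // install as the global tracer provider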
Thanks so much for @BObecny's help! This is a supplement to @BObecny's answer.
Since I'm more interested in the Jaeger integration, here is the configuration that sets up Jaeger, Zipkin, and Prometheus all together. It now works for both the front end and the back end.
First, both the front end and the back end use the same exporter code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
new SimpleSpanProcessor(
  new CollectorTraceExporter({
    serviceName: 'my-service',
  })
)
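On the front end, that same span processor is attached to a WebTracerProvider instead of the Node one. A rough sketch, assuming the 0.10.x @opentelemetry/web package:

import { WebTracerProvider } from '@opentelemetry/web';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

// Front end: same exporter options, attached to a web tracer provider.
const webProvider = new WebTracerProvider();
webProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({ serviceName: 'my-service' })
  )
);
webProvider.register(); // install as the global tracer provider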
docker-compose.yaml
version: "3"
services:
# Collector
collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
volumes:
- ./collector-config.yaml:/conf/collector-config.yaml
ports:
- "9464:9464"
- "55680:55680"
- "55681:55681"
depends_on:
- jaeger-all-in-one
- zipkin-all-in-one
# Jaeger
jaeger-all-in-one:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686"
- "14268"
- "14250"
# Zipkin
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
# Prometheus
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"
processors:
  batch:
  queued_retry:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin, jaeger]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.
scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
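To sanity-check the whole pipeline, a manually created span should show up in the Jaeger UI (http://localhost:16686) and the Zipkin UI (http://localhost:9411) shortly after it ends. A minimal sketch, where provider is whichever tracer provider was registered above:

// Smoke test: create and end one span, then search for 'my-service' in Jaeger/Zipkin.
const tracer = provider.getTracer('smoke-test');
const span = tracer.startSpan('hello-collector');
span.setAttribute('example', true);
span.end();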