Fluentd - file buffer setup for redis_output (or elasticsearch) with multiprocess workers
Can someone help me configure a file buffer for multiprocess workers in fluentd?

I use the configuration below, but when I add @type file plus an id to the redis_store plugin's buffer, it throws this error:

failed to configure sub output copy: Plugin 'file' does not support multi workers configuration"

Without an id it fails with:

failed to configure sub output copy: Other 'redis_store' plugin already use same buffer path

But the path contains a tag, and the same pattern works for a different output (file); it only fails for the Redis output. I don't want to use the default memory buffer here, because memory usage grows when there is too much data. Is it possible to configure this combination (multiprocess + file buffer for the redis_store plugin or the Elasticsearch plugin)?

Configuration:
<system>
  workers 4
  root_dir /fluentd/log/buffer/
</system>

<worker 0-3>
  <source>
    @type forward
    bind 0.0.0.0
    port 9880
  </source>

  <label @TEST>
    <match test.**>
      @type forest
      subtype copy
      <template>
        <store>
          @type file
          @id "file_${tag_parts[2]}/${tag_parts[3]}/${tag_parts[3]}-#{worker_id}"
          @log_level debug
          path "fluentd/log/${tag_parts[2]}/${tag_parts[3]}/${tag_parts[3]}-#{worker_id}.*.log"
          append true
          <buffer>
            flush_mode interval
            flush_interval 3
            flush_at_shutdown true
          </buffer>
          <format>
            @type single_value
            message_key log
          </format>
        </store>
        <store>
          @type redis_store
          host server_ip
          port 6379
          key test
          store_type list
          <buffer>
            # @type file  CAN'T USE
            # @id test_${tag_parts[2]}/${tag_parts[3]}/${tag_parts[3]}-#{worker_id}  WITH ID - DOESN'T SUPPORT MULTIPROCESS
            # path fluentd/log/${tag_parts[2]}/${tag_parts[3]}/${tag_parts[3]}-#{worker_id}.*.log  WITHOUT ID - OTHER PLUGIN USES SAME BUFFER PATH
            flush_mode interval
            flush_interval 3
            flush_at_shutdown true
            flush_thread_count 4
          </buffer>
        </store>
      </template>
    </match>
  </label>
</worker>
Versions:
- Fluentd v1.14.3
- fluent-plugin-redis-store v0.2.0
- fluent-plugin-forest v0.3.3
Thanks!
The redis_store configuration was wrong; in the correct version the @id goes directly under the first @type (at the plugin level, not inside the buffer section):
<store>
  @type redis_store
  @id test_${tag_parts[2]}/${tag_parts[3]}/${tag_parts[3]}-#{worker_id}
  host server_ip
  port 6379
  key test
  store_type list
  <buffer>
    @type file
    flush_mode interval
    flush_interval 3
    flush_at_shutdown true
    flush_thread_count 4
  </buffer>
</store>
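For reference, a minimal sketch of the general pattern (plugin id `my_redis` and the paths are placeholders): when `root_dir` is set in `<system>` and the output plugin has a plugin-level `@id`, Fluentd derives a unique per-worker buffer directory automatically, so the `<buffer>` section needs no explicit `path` and the file buffer works with multiple workers. An explicit buffer `path` is what triggers the "does not support multi workers" error, because the same path would be shared by every worker process.

```
<system>
  workers 4
  root_dir /fluentd/log/buffer/
</system>

<match test.**>
  @type redis_store
  @id my_redis          # unique plugin id; buffer dir is derived from root_dir + worker id + this id
  host server_ip
  port 6379
  key test
  store_type list
  <buffer>
    @type file          # no explicit path: each worker gets its own directory under root_dir
    flush_mode interval
    flush_interval 3
    flush_at_shutdown true
  </buffer>
</match>
```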
Thanks for your time, Azeem :)