Flume to HDFS splits one file into many small files

I am trying to transfer a 700 MB log file from Flume to HDFS. I have configured the Flume agent as follows:

...
tier1.channels.memory-channel.type = memory
...
tier1.sinks.hdfs-sink.channel = memory-channel
tier1.sinks.hdfs-sink.type = hdfs
tier1.sinks.hdfs-sink.path = hdfs://***
tier1.sinks.hdfs-sink.fileType = DataStream
tier1.sinks.hdfs-sink.rollSize = 0

The source is spooldir, the channel is memory, and the sink is hdfs.

I also tried sending a 1 MB file, and Flume split it into 1000 files of about 1 KB each. Another thing I noticed is that the transfer is very slow: roughly one minute for 1 MB. Am I doing something wrong?

You also need to disable the roll timeout; that is done with the following settings:

tier1.sinks.hdfs-sink.hdfs.rollCount = 0
tier1.sinks.hdfs-sink.hdfs.rollInterval = 300

rollCount prevents rolling based on the number of events; rollInterval is set to 300 seconds here, and setting it to 0 disables time-based rolling. You have to pick which roll mechanism you want, otherwise Flume will only close the files on shutdown.
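For example, if you want files to roll only once they reach a certain size, a sketch along these lines should work (the 128 MB threshold is just an illustrative value, not something from the question):

# roll only on file size; disable time-based and event-count-based rolling
tier1.sinks.hdfs-sink.hdfs.rollInterval = 0
tier1.sinks.hdfs-sink.hdfs.rollCount = 0
tier1.sinks.hdfs-sink.hdfs.rollSize = 134217728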

The defaults are as follows:

hdfs.rollInterval   30     Number of seconds to wait before rolling current file (0 = never roll based on time interval)
hdfs.rollSize       1024   File size to trigger roll, in bytes (0 = never roll based on file size)
hdfs.rollCount      10     Number of events written to file before it rolled (0 = never roll based on number of events)
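
Putting the pieces together, a minimal end-to-end sketch for this agent might look like the following. The source name and spool directory are assumptions, since the original configuration only shows fragments; note that the HDFS sink properties carry the hdfs. prefix, matching the rollCount/rollInterval lines above:

# agent components (component names other than hdfs-sink / memory-channel are assumed)
tier1.sources = spool-source
tier1.channels = memory-channel
tier1.sinks = hdfs-sink

# spooldir source (the directory path is a placeholder)
tier1.sources.spool-source.type = spooldir
tier1.sources.spool-source.spoolDir = /path/to/spool
tier1.sources.spool-source.channels = memory-channel

# memory channel
tier1.channels.memory-channel.type = memory

# HDFS sink: write plain text and roll only every 300 seconds
tier1.sinks.hdfs-sink.channel = memory-channel
tier1.sinks.hdfs-sink.type = hdfs
tier1.sinks.hdfs-sink.hdfs.path = hdfs://***
tier1.sinks.hdfs-sink.hdfs.fileType = DataStream
tier1.sinks.hdfs-sink.hdfs.rollSize = 0
tier1.sinks.hdfs-sink.hdfs.rollCount = 0
tier1.sinks.hdfs-sink.hdfs.rollInterval = 300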