Kafka batch listener, polling a fixed number of records (as many as possible)

I am using Spring Boot version 1.5.4.RELEASE and Spring Kafka version 1.3.8.RELEASE.

My Kafka consumer processes records in batches of 100. The topic I am consuming from has 10 partitions, and I have 10 Kafka consumer instances.

Is there a way to force each poll to fetch a fixed number of 100 records (as many as possible), except for the last chunk in a given partition?
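
For context, the kind of batch-listener setup described in the question might look roughly like the sketch below. This is assumed code, not the asker's actual configuration; the broker address, group id, and topic name are placeholders.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
@EnableKafka
public class BatchConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "batch-group");             // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Upper bound per poll: at most 100 records are delivered in one batch.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(true); // deliver records to the listener as a List
        factory.setConcurrency(10);     // one consumer per partition (10 partitions)
        return factory;
    }

    @KafkaListener(topics = "my-topic") // placeholder topic name
    public void listen(List<ConsumerRecord<String, String>> records) {
        // Batches are capped at 100 but are often smaller -- which is what the question asks about.
        System.out.println("Received batch of " + records.size());
    }
}
```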

Kafka has no fetch.min.records property.

The best you can do is simulate it with these two consumer properties:

fetch.min.bytes: The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.

fetch.max.wait.ms: The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.

This will work if your records have similar lengths.
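
A rough sketch of how those two properties could be combined with max.poll.records to approximate a fixed batch size is shown below. The 1 KB average record size and the 500 ms wait are assumptions to illustrate the idea; you would substitute values measured from your own payloads and latency budget.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

// Builds the consumer property overrides; merge this map into the properties
// passed to the DefaultKafkaConsumerFactory from the question's configuration.
public class FetchTuning {

    public static Map<String, Object> fetchProps() {
        Map<String, Object> props = new HashMap<>();

        int averageRecordBytes = 1024; // assumption: ~1 KB per record
        int targetBatchSize = 100;

        // Ask the broker to hold the fetch until roughly 100 records' worth of
        // bytes have accumulated...
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, averageRecordBytes * targetBatchSize);
        // ...but never wait longer than 500 ms, so the last (short) chunk of a
        // partition is still delivered promptly.
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
        // Keep the upper bound at 100 so a batch never exceeds the target size.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, targetBatchSize);

        return props;
    }
}
```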

By the way, Spring Boot 1.5.x is end of life and no longer supported; the current Boot version is 2.2.3.