Kafka creating more partitions than hardcoded in the properties file

I have configured the producer as follows:

spring.cloud.stream.bindings.pc-abc-out-0.destination=pc-abc-my-topic
spring.cloud.stream.bindings.pc-abc-out-0.producer.partition-count=5
spring.cloud.stream.bindings.pc-abc-out-0.producer.header-mode=headers
spring.cloud.stream.bindings.pc-abc-out-0.producer.partition-count=10
spring.cloud.stream.bindings.pc-abc-out-0.producer.partitionKeyExpression=payload.key
spring.cloud.stream.kafka.bindings.pc-abc-out-0.producer.sync=true

However, in the Kafka logs I keep getting this error:


o.s.kafka.support.LoggingProducerListener - Exception thrown when sending a message with key='byte[14]' and payload='byte[253]' to *topic pc-abc-my-topic and partition 8*: org.apache.kafka.common.errors.TimeoutException: Topic pc-abc-my-topic not present in metadata after 60000 ms.

The important part of the error message is: topic pc-abc-my-topic and partition 8.

Why is it trying to write to partition 8 when I have defined the partition count as 5? Shouldn't the partition numbers be in the range 0-4? I also have several other error messages with partition numbers higher than 5.

Earlier I had added the following to the configuration:

spring.cloud.stream.kafka.binder.auto-add-partitions=true

But I removed it, and we scaled the service down and back up again. The problem still persists. Is this a case of stale configuration?

My guess is that you first created the topic with 8 partitions. If the topic already exists, spring-kafka will not recreate it using your partition configuration.
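
To confirm that, check how many partitions the topic actually has on the broker. A minimal sketch with the Kafka Admin API (the bootstrap address here is just an assumption, adjust it to your environment):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.List;
import java.util.Properties;

public class DescribeTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: replace with your broker address
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription description = admin.describeTopics(List.of("pc-abc-my-topic"))
                    .all().get()
                    .get("pc-abc-my-topic");
            // Prints the partition count the broker actually knows about
            System.out.println("Partitions on broker: " + description.partitions().size());
        }
    }
}

Compare that number with the partition-count in your binding; if the broker reports fewer partitions than the binding expects, that explains the mismatch.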

Here is the documentation: https://docs.spring.io/spring-cloud-stream/docs/Brooklyn.RELEASE/reference/html/_apache_kafka_binder.html#:~:text=Default%3A%20Empty%20map.-,The,-Kafka%20binder%20will

The Kafka binder will use the partitionCount setting of the producer as a hint to create a topic with the given partition count (in conjunction with the minPartitionCount, the maximum of the two being the value being used). Exercise caution when configuring both minPartitionCount for a binder and partitionCount for an application, as the larger value will be used. If a topic already exists with a smaller partition count and autoAddPartitions is disabled (the default), then the binder will fail to start.
If a topic already exists with a smaller partition count and autoAddPartitions is enabled, new partitions will be added. If a topic already exists with a larger number of partitions than the maximum of (minPartitionCount and partitionCount), the existing partition count will be used.
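
In other words, with autoAddPartitions disabled the binder will not touch an existing topic. If you do want the existing topic grown to match the binding, that is roughly what auto-add-partitions does for you; here is a sketch of the equivalent Admin API call (broker address and target count are assumptions):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Map;
import java.util.Properties;

public class AddPartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: replace with your broker address
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Grow pc-abc-my-topic to 10 partitions; this fails if it already has more
            admin.createPartitions(Map.of("pc-abc-my-topic", NewPartitions.increaseTo(10)))
                    .all().get();
        }
    }
}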

Your real problem is "Topic pc-abc-my-topic not present in metadata after 60000 ms." It is not about the partition count.

That issue is addressed here:

https://developpaper.com/topic-xxx-not-present-in-metadata-after-60000-ms/

The problem turned out to be this configuration:

spring.cloud.stream.bindings.pc-abc-out-0.producer.partitionKeyExpression=payload.key
spring.cloud.stream.kafka.bindings.pc-abc-out-0.producer.sync=true

The partitions computed from payload.key were sometimes higher than the partitionCount value we had actually configured and created on the infrastructure. Removing these two configuration lines stopped the problem.
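
For anyone else hitting this, a rough sketch of why the computed partition can point past what the broker actually has (the key value is hypothetical, and the selection logic is only an approximation of Spring Cloud Stream's default, roughly a hash of the key modulo the binding's partition-count):

public class PartitionSelectionSketch {
    public static void main(String[] args) {
        int bindingPartitionCount = 10;   // producer.partition-count from the binding, not the broker
        Object key = "ORDER-42";          // hypothetical result of evaluating payload.key
        // Approximation of the default selection: hash the key, take it modulo the
        // binding's partition-count
        int partition = Math.abs(key.hashCode()) % bindingPartitionCount;
        // This can land on 0..9 (e.g. 8) even if the topic on the broker has only
        // 5 partitions, which is when the metadata timeout shows up
        System.out.println("Computed partition: " + partition);
    }
}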