Kafka creates consumers in a pool with two load-balanced Tomcats
I have two Tomcats and one web application.
I run the Kafka and ZooKeeper services in Docker, and I run the Tomcats.
In the Kafka console, I see that the second one creates 2 consumers, with these messages:
kafka_1 | [2019-12-20 16:30:20,725] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12902 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:20,730] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12902 (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:21,059] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12902 (__consumer_offsets-24) (reason: Adding new member consumer-1-5c607368-a22c-44dd-b460-6f33101e3e7a with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:21,060] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12903 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:21,063] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12903 (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:21,749] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12903 (__consumer_offsets-24) (reason: Adding new member consumer-1-01c204d3-0e36-487e-ac13-374aaf4d84fd with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:21,751] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12904 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:21,754] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12904 (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2019-12-20 16:30:22,081] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12904 (__consumer_offsets-24) (reason: Adding new member consumer-1-4993cf30-5924-47db-9c63-2b1008f98924 with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
I use this docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "clinicaleventmanager:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
If I run only one Tomcat, the problem does not occur.
Why?
How can I avoid it?
Thanks
This is happening because a rebalance takes place whenever you attach a new consumer to the same topic. A Topic is just a layer in front of the Partitions: in practice, when you subscribe a new consumer, it subscribes to the partitions. Kafka is designed this way because ordering matters, and you can only preserve ordering as long as you have no more consumers than partitions (within a consumer group, more than one consumer cannot consume from the same partition). That is why you see those log lines.
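To make that concrete, here is a minimal sketch assuming a plain Java KafkaConsumer (the poster's client code is not shown; the group.id value and the String deserializers are assumptions, while 127.0.0.1:9092 and the topic name come from the docker-compose.yml above). Because the topic is created as clinicaleventmanager:1:1 it has a single partition, so when both Tomcats run something like this with the same group.id, only one member can own the partition and every join triggers the "Preparing to rebalance group ..." lines seen in the log.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SameGroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        // Both Tomcats use the same group.id (the value here is illustrative), so the
        // broker treats them as members of one consumer group and must rebalance the
        // single partition whenever a member joins or leaves.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "clinicaleventmanager-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Each new member joining the group produces the
            // "Preparing to rebalance group ..." messages shown above.
            consumer.subscribe(Collections.singletonList("clinicaleventmanager"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}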
Solved!
The problem was that in kafka.properties, the group.id property must be different for each Tomcat.
I removed group.id from the properties file, and it worked like magic!
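For reference, a hedged sketch of the fix described above, assuming the webapp builds its consumer configuration from kafka.properties (the file path and the hostname-based suffix are illustrations, not the poster's actual code): giving each Tomcat its own group.id makes the two instances independent consumer groups, so they no longer rebalance the single partition between them, and each instance then receives every message.

import java.io.FileInputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.util.Properties;

public class PerInstanceGroupId {
    // Loads the shared kafka.properties and overrides group.id with a value that is
    // unique per host, so each Tomcat joins its own consumer group.
    public static Properties loadConsumerProps(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        String host = InetAddress.getLocalHost().getHostName();
        props.put("group.id", "clinicaleventmanager-" + host);
        return props;
    }
}

Note that simply deleting group.id only works when the client framework in use generates one automatically; a plain KafkaConsumer needs a group.id to use subscribe(), so presumably the framework generated a distinct one per instance, which is likely why removing the property had the same effect.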