Apache Kafka: Producer Not Producing All Data

I am new to Kafka. My requirement is that I have two tables in a database, a source and a destination. I want to fetch data from the source table and store it into the destination table, with Kafka sitting between them as producer and consumer. I have written the code, but the problem is that some data gets lost when the producer produces it. For example, if I have 100 records in the source table, it does not produce all 100 of them. I am using Kafka 0.10.

My producer configuration:

bootstrap.servers=192.168.1.XXX:9092,192.168.1.XXX:9093,192.168.1.XXX:9094
acks=all
retries=2
batch.size=16384
linger.ms=2
buffer.memory=33554432
key.serializer=org.apache.kafka.common.serialization.IntegerSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer

My producer code:

public void run() {
    SourceDAO sourceDAO = new SourceDAO();
    Source source;
    int id;
    try {
        logger.debug("INSIDE RUN");
        List<Source> listOfEmployee = sourceDAO.getAllSource();
        Iterator<Source> sourceIterator = listOfEmployee.iterator();
        String sourceJson;
        Gson gson = new Gson();
        while(sourceIterator.hasNext()) {
            source = sourceIterator.next();
            sourceJson = gson.toJson(source);
            id = source.getId();
            producerRecord = new ProducerRecord<Integer, String>(TOPIC, id, sourceJson);
            producerRecords.add(producerRecord);
        }

        for(ProducerRecord<Integer, String> record : producerRecords) {
            logger.debug("Producer Record: " + record.value());
            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    logger.debug("Exception: " + exception);
                    if (exception != null)
                        throw new RuntimeException(exception.getMessage());
                    logger.info("The offset of the record we just sent is: " + metadata.offset()
                            + " In Partition : " + metadata.partition());
                }
            });
        }
        producer.close();
        producer.flush();
        logger.info("Size of Record: " + producerRecords.size());
    } catch (SourceServiceException e) {
        logger.error("Unable to Produce data...", e);
        throw new RuntimeException("Unable to Produce data...", e);
    }
}

My consumer configuration:

bootstrap.servers=192.168.1.XXX:9092,192.168.1.231:XXX,192.168.1.232:XXX
group.id=consume
client.id=C1
enable.auto.commit=true
auto.commit.interval.ms=1000
max.partition.fetch.bytes=10485760
session.timeout.ms=35000
consumer.timeout.ms=35000
auto.offset.reset=earliest
message.max.bytes=10000000
key.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer

value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

My consumer code:

public void doWork() {
    logger.debug("Inside doWork of DestinationConsumer");
    DestinationDAO destinationDAO = new DestinationDAO();
    consumer.subscribe(Collections.singletonList(this.TOPIC));
    while(true) {
        ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
        int minBatchSize = 1;
        for(ConsumerRecord<String, String> rec : consumerRecords) {
            logger.debug("Consumer Received Record: " + rec);
            consumerRecordsList.add(rec);
        }
        logger.debug("Record Size: " + consumerRecordsList.size());
        if(consumerRecordsList.size() >= minBatchSize) {
            try {
                destinationDAO.insertSourceDataIntoDestination(consumerRecordsList);
            } catch (DestinationServiceException e) {
                logger.error("Unable to update destination table");
            }
        }
    }
}

From what I can see here, my guess is that you are not flushing and closing the producer correctly. Note that send() runs asynchronously and only prepares a batch that is sent later (depending on the configuration of your producer):

From the Kafka documentation:

The send() method is asynchronous. When called it adds the record to a buffer of pending record sends and immediately returns. This allows the producer to batch together individual records for efficiency.

You should call producer.flush() and then producer.close() after iterating over all producerRecords; in your code the producer is closed before it is flushed. (By the way: why buffer the entire producerRecords list at all? This can cause problems when you have a large number of records.)
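A minimal sketch of what the tail of your run() method could look like, reusing the names from your question. The ordering of flush() before close() is the point; the callback also logs failures instead of throwing, since an exception thrown from the callback runs on the producer's I/O thread and never reaches run():

```java
for (ProducerRecord<Integer, String> record : producerRecords) {
    producer.send(record, (metadata, exception) -> {
        if (exception != null) {
            // Log instead of rethrowing: the callback executes on the
            // producer's background I/O thread, so a thrown exception
            // would not propagate back into run().
            logger.error("Failed to send record with key " + record.key(), exception);
        } else {
            logger.info("Sent to partition " + metadata.partition()
                    + " at offset " + metadata.offset());
        }
    });
}
producer.flush();  // block until every buffered record has actually been sent
producer.close();  // then release the producer's resources
```

Calling close() alone would also wait for outstanding sends to complete, but flushing explicitly first makes the intent clear and lets you log the final record count before shutdown.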

If that does not help, you should try to find out what is missing, e.g. with the console consumer (`kafka-console-consumer.sh --from-beginning`). Please provide more code: how is the producer configured? What does your consumer look like? What is the type of producerRecords?
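As a quick diagnostic, you could also send synchronously by blocking on the Future that send() returns. This is slower, but any failed send then surfaces immediately as an exception instead of being silently dropped. A sketch, assuming the same producer and producerRecords as in the question:

```java
for (ProducerRecord<Integer, String> record : producerRecords) {
    try {
        // get() blocks until the broker acknowledges this record,
        // so a lost record shows up here as an ExecutionException.
        RecordMetadata metadata = producer.send(record).get();
        logger.info("Acked: partition " + metadata.partition()
                + ", offset " + metadata.offset());
    } catch (InterruptedException | ExecutionException e) {
        logger.error("Send failed for key " + record.key(), e);
    }
}
```

Once you have confirmed all records arrive, switch back to the asynchronous callback style for throughput.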

Hope this helps.