Spring Cloud Stream pollable consumer dlq and errorChannel don't work if a different thread is being used

To manage a long-running task with Spring Cloud Stream 3.1.1 and the Kafka binder, we need to use a pollable consumer and manage the consumption manually on a separate thread, so that Kafka does not trigger a rebalance. To do this, we defined a new annotation to manage the pollable consumer. The problem with this approach is that, because the work has to be managed on a separate thread, any exception that is thrown never ends up in the errorChannel or the DLQ.
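
The annotation itself is omitted here; it is just a marker that the aspect's @annotation(pollableConsumer) pointcut matches. A minimal sketch (the real definition may differ):

  import java.lang.annotation.ElementType;
  import java.lang.annotation.Retention;
  import java.lang.annotation.RetentionPolicy;
  import java.lang.annotation.Target;

  // Marker annotation picked up by the @Around advice below
  @Target(ElementType.METHOD)
  @Retention(RetentionPolicy.RUNTIME)
  public @interface PollableConsumer {
  }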

  private final ExecutorService executor = Executors.newFixedThreadPool(1);

  private volatile boolean paused = false;

  @Around(value = "@annotation(pollableConsumer) && args(dataCapsule,..)")
  public void handleMessage(ProceedingJoinPoint joinPoint,
      PollableConsumer pollableConsumer, Object dataCapsule) {
    if (dataCapsule instanceof Message) {
      Message<?> message = (Message<?>) dataCapsule;
      AcknowledgmentCallback callback = StaticMessageHeaderAccessor
          .getAcknowledgmentCallback(message);
      callback.noAutoAck();

      if (!paused) {
        // The separate thread is not busy with a previous message, so process this message:
        Runnable runnable = () -> {
          try {
            paused = true;

            // Call method to process this Kafka message
            joinPoint.proceed();

            callback.acknowledge(Status.ACCEPT);
          } catch (Throwable e) {
            callback.acknowledge(Status.REJECT);
            throw new PollableConsumerException(e);
          } finally {
            paused = false;
          }
        };

        executor.submit(runnable);
      } else {
        // The separate thread is busy with a previous message, so re-queue this message for later:
        callback.acknowledge(Status.REQUEUE);
      }
    }
  }
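
Note that, independently of the binder's error handling, an exception thrown from a task passed to ExecutorService.submit() is captured in the returned Future; since the Future is discarded here, the rethrown PollableConsumerException is never even logged. A minimal standalone sketch of the submit()/execute() difference:

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  public class SubmitVsExecute {

    public static void main(String[] args) {
      ExecutorService executor = Executors.newFixedThreadPool(1);

      Runnable failing = () -> {
        throw new RuntimeException("boom");
      };

      // submit(): the exception is captured in the returned Future;
      // discarding the Future silently swallows the failure
      executor.submit(failing);

      // execute(): the exception reaches the worker thread's
      // uncaught-exception handler, which prints it by default
      executor.execute(failing);

      executor.shutdown();
    }
  }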

We could create a separate output channel to publish the message in case of an exception, but it feels like we would be implementing something that may not be necessary.

Update 1

We added these beans:

  @Bean
  public KafkaTemplate<String, byte[]> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
  }

  @Bean
  public ProducerFactory<String, byte[]> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    // Bootstrap servers are plain host:port pairs, not http:// URLs
    configProps.put(
        org.apache.kafka.clients.producer.ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
        "localhost:9092");
    configProps.put(
        org.apache.kafka.clients.producer.ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        StringSerializer.class);
    configProps.put(
        org.apache.kafka.clients.producer.ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        KafkaAvroSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
  }

  @Bean
  public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    return new KafkaAdmin(configs);
  }

  @Bean
  public NewTopic topicErr() {
    return TopicBuilder.name("ERR").partitions(1).replicas(1).build();
  }

  @Bean
  public SeekToCurrentErrorHandler eh(KafkaOperations<String, byte[]> template) {
    return new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(
        template,
        // Partitions are zero-based; the single-partition ERR topic only has partition 0
        (cr, e) -> new TopicPartition("ERR", 0)),
        new FixedBackOff(0L, 1L));
  }

And enable-dlq is not set under spring.cloud.stream.kafka.bindings.channel-name.consumer, but we still do not see any messages produced to the ERR topic, even for exceptions thrown on the main thread.

If enable-dlq is set to true, exceptions on the main thread are published to the default DLQ topic and, as expected, exceptions on the child thread are ignored.
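
For reference, this is the binder-level DLQ switch being referred to (channel-name is a placeholder for the actual binding name; dlq-name is optional, and the default DLQ topic is error.<destination>.<group>):

spring.cloud.stream.kafka.bindings.channel-name.consumer.enable-dlq=true
spring.cloud.stream.kafka.bindings.channel-name.consumer.dlq-name=ERR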

Update 2

Gary's example seems to work fine, although we needed to make some modifications, since we use the deprecated StreamListener approach rather than Functions. Still, there are some issues we could not resolve for our case.

Your observation is correct; error handling is bound to the thread.

You can use a DeadLetterPublishingRecoverer directly in your code to make publishing to a DLQ easier (instead of an output channel). That way, you get the enhanced headers with the exception information, etc.

https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters

EDIT

Here is an example; I am pausing the binding to prevent any new deliveries while the "job" is running, rather than re-queueing the deliveries, as you are doing.

@SpringBootApplication
@EnableScheduling
public class So67296258Application {

    public static void main(String[] args) {
        SpringApplication.run(So67296258Application.class, args);
    }

    @Bean
    TaskExecutor exec() {
        return new ThreadPoolTaskExecutor();
    }

    @Bean
    DeadLetterPublishingRecoverer recoverer(KafkaOperations<Object, Object> template) {
        return new DeadLetterPublishingRecoverer(template);
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("polled.DLT").partitions(1).replicas(1).build();
    }

    @Bean
    MessageSourceCustomizer<KafkaMessageSource<?, ?>> customizer() {
        return (source, dest, group) -> source.setRawMessageHeader(true);
    }

}

@Component
class Handler {

    private static final Logger LOG = LoggerFactory.getLogger(Handler.class);

    private final PollableMessageSource source;

    private final TaskExecutor exec;

    private final BindingsEndpoint endpoint;

    private final DeadLetterPublishingRecoverer recoverer;

    Handler(PollableMessageSource source, TaskExecutor exec, BindingsEndpoint endpoint,
            DeadLetterPublishingRecoverer recoverer) {

        this.source = source;
        this.exec = exec;
        this.endpoint = endpoint;
        this.recoverer = recoverer;
    }

    @Scheduled(fixedDelay = 5_000)
    public void process() {
        LOG.info("Polling");
        boolean polled = this.source.poll(msg -> {
            LOG.info("Pausing Binding");
            this.endpoint.changeState("polled", State.PAUSED);
            AcknowledgmentCallback callback = StaticMessageHeaderAccessor.getAcknowledgmentCallback(msg);
            callback.noAutoAck();
//          LOG.info(msg.toString());
            this.exec.execute(() -> {
                try {
                    runJob(msg);
                }
                catch (Exception e) {
                    this.recoverer.accept(msg.getHeaders().get(KafkaHeaders.RAW_DATA, ConsumerRecord.class), e);
                }
                finally {
                    callback.acknowledge();
                    this.endpoint.changeState("polled", State.RESUMED);
                    LOG.info("Resumed Binding");
                }
            });
        });
        LOG.info("" + polled);
    }

    private void runJob(Message<?> msg) throws InterruptedException {
        LOG.info("Running job");
        Thread.sleep(30_000);
        throw new RuntimeException("fail");
    }

}

spring.cloud.stream.pollable-source=polled
spring.cloud.stream.bindings.polled-in-0.destination=polled
spring.cloud.stream.bindings.polled-in-0.group=polled

EDIT2

Answers to the supplementary questions:

1, 2: See the Spring for Apache Kafka documentation: https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters

The DLPR has an alternate constructor that lets you specify a destination resolver. The default one simply appends .DLT and uses the same partition. The javadocs describe how the destination partition can be specified:

    /**
     * Create an instance with the provided template and destination resolving function,
     * that receives the failed consumer record and the exception and returns a
     * {@link TopicPartition}. If the partition in the {@link TopicPartition} is less than
     * 0, no partition is set when publishing to the topic.
     * @param template the {@link KafkaOperations} to use for publishing.
     * @param destinationResolver the resolving function.
     */

When null, the KafkaProducer chooses the partition.
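
For illustration, a sketch of that alternate constructor with a resolver that always routes to your ERR topic and leaves the partition choice to the producer:

@Bean
DeadLetterPublishingRecoverer errRecoverer(KafkaOperations<Object, Object> template) {
    return new DeadLetterPublishingRecoverer(template,
            // partition -1 (< 0): no partition is set, so the KafkaProducer chooses one
            (record, ex) -> new TopicPartition("ERR", -1));
}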

  3. Wire up a RetryTemplate with appropriate retry and back-off policies; then

retryTemplate.execute(context -> { ... },
    context -> { ... });

The second argument is a RecoveryCallback, invoked when the retries are exhausted.
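
A fuller sketch of that pattern, assuming it replaces the runJob(msg) call inside the task executor in the example above (the policies shown are arbitrary):

RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));  // up to 3 attempts in total
FixedBackOffPolicy backOff = new FixedBackOffPolicy();
backOff.setBackOffPeriod(2_000L);                        // 2 seconds between attempts
retryTemplate.setBackOffPolicy(backOff);

this.exec.execute(() -> {
    try {
        retryTemplate.execute(context -> {
            runJob(msg);                                 // retried per the policies above
            return null;
        }, context -> {
            // RecoveryCallback: retries are exhausted, publish to the DLT
            this.recoverer.accept(
                    msg.getHeaders().get(KafkaHeaders.RAW_DATA, ConsumerRecord.class),
                    (Exception) context.getLastThrowable());
            return null;
        });
    }
    catch (Exception e) {
        // reached only if the recovery callback itself throws
    }
});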

  4. It is more efficient. With your solution, you keep retrieving and re-queueing deliveries while the previous task is being processed. By pausing the binding, we tell Kafka not to send any more records when poll() is called, until the consumer is resumed. This lets us keep the consumer alive by polling it, but without the overhead of retrieving and resetting the offset.