Invalid state store content after aborted transaction with exactly_once and standby replicas after task migration

We have a simple command-ID deduplication mechanism based on Kafka Streams state stores. It keeps the command IDs from the last hour in a persistentWindowStore. We run into a problem with the store if an exception is thrown later in that topology. We run 3 nodes using docker, each with multiple threads for this particular Streams application. The flow is as follows:

2020-04-16 07:26:19.144 DEBUG 1 --- [-StreamThread-1] c.g.f.c.s.validation.CommandIdValidator  : CommandId: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a is not a duplicate.
2020-04-16 07:26:19.144  INFO 1 --- [-StreamThread-1] c.g.f.c.s.v.CommandIdValidationUtils     : Putting commandId in store: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a 1587013200000
2020-04-16 07:26:19.153  INFO 1 --- [-StreamThread-1] g.f.l.s.v.XXXXValidationHandler : 
Exception in thread "XXXXXXProcessor-53816dfa-6ebf-47ad-864c-0fbcb61dc51c-StreamThread-1" java.lang.RuntimeException: Unexpected exponent value: -5
Failed to process stream task 1_2 due to the following error
Shutting down
Unclean shutdown of all active tasks
Flushing all stores registered in the state manager

The first node in the topology checks whether the command is a duplicate and, if it is not, puts it into the state store. The next topology node throws an exception. As a result, the transaction is aborted and offsets are not committed. I double-checked the changelog topic - the relevant message was not committed.
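
For reference, this is roughly how the changelog can be inspected: a plain consumer configured with isolation.level=read_committed only returns records of committed transactions, so the aborted put should not show up. A minimal, hypothetical sketch (bootstrap servers and group id are placeholders):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ChangelogInspector {

      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "changelog-inspector");     // placeholder
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // Only records of committed transactions are returned; aborted ones are filtered out.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<byte[], String> consumer = new KafkaConsumer<>(props)) {
          consumer.subscribe(List.of("XXXXProcessor-COMMAND_ID_STORE-changelog"));
          // A single poll is enough for a sketch; in practice you may need to poll in a loop.
          for (ConsumerRecord<byte[], String> record : consumer.poll(Duration.ofSeconds(10))) {
            // The window-store changelog key is binary (key + window start + sequence number),
            // so only the partition, offset and value (the command id) are printed here.
            System.out.printf("partition %d, offset %d, value %s%n",
                record.partition(), record.offset(), record.value());
          }
        }
      }
    }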

2020-04-16 07:26:43.399 DEBUG 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator  : CommandId: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a is not a duplicate.
2020-04-16 07:26:43.400  INFO 1 --- [-StreamThread-2] c.g.f.c.s.v.CommandIdValidationUtils     : Putting commandId in store: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a 1587013200000
2020-04-16 07:26:43.425  INFO 1 --- [-StreamThread-2] g.f.l.s.v.XXXXValidationHandler : 
Exception in thread "XXXXXXProcessor-d069f877-90ff-4ecf-bfbf-3b2bd0e20eac-StreamThread-2" java.lang.RuntimeException: XXXX

Same story, the transaction is aborted.

2020-04-16 07:26:55.655  INFO 1 --- [-StreamThread-2] c.g.f.c.s.v.CommandIdValidationUtils     : next: KeyValue(1587013200000, mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a)
2020-04-16 07:26:55.655  WARN 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator  : Command duplicate detected. Command id mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a

We analysed the task restoration logs and everything seems fine; the aborted transactions were skipped: Skipping aborted record batch from partition

However, the RocksDB instance for this task contains the command id:

docker cp 134d4f6f82cf:/tmp/kafka-streams/XXXXXXProcessor/1_0/COMMAND_ID_STORE/COMMAND_ID_STORE.1587013200000 .
ldb --db=COMMAND_ID_STORE.1587013200000 scan
mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a : mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a
mnl_cmd_c51cbb71-0274-44c6-b016-976add743fe6 : mnl_cmd_c51cbb71-0274-44c6-b016-976add743fe6

Our understanding is that even though the transaction was aborted on node 1 (A), the RocksDB entries were not cleaned up afterwards, and during (C) another thread picked the data up from the already existing RocksDB instance (after all, the on-disk path is tied to the task, not to the thread). We are not sure what happens to the persistent store during task restoration or after a task failure. In our view this situation breaks the exactly-once semantics, because the state store is not in sync with the changelog topic.

Is this a misconfiguration on our side, or wrong assumptions? A bug? We tried changing the store to an inMemoryWindowStore and that seems to solve the problem, but we would like to understand this behaviour, because we rely heavily on the state stores.
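
For reference, the in-memory variant we tried only swaps the store supplier; a minimal sketch (storeName, RETENTION and WINDOW_SIZE are the same constants used for the persistent store shown further below):

    WindowBytesStoreSupplier inMemorySupplier = Stores.inMemoryWindowStore(storeName,
        RETENTION,
        WINDOW_SIZE,
        true);
    StoreBuilder<WindowStore<String, String>> inMemoryStoreBuilder = Stores
        .windowStoreBuilder(inMemorySupplier, Serdes.String(), Serdes.String());
    streamsBuilder.addStateStore(inMemoryStoreBuilder);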

We use Java 11, Kafka clients 2.4.1 and spring-kafka 2.4.5. We rolled back to Kafka clients 2.3.1 and the problem persists.

EDIT We performed more tests with configuration changes: after switching from num.standby.replicas = 1 to num.standby.replicas = 0 the problem disappeared (the exact property change is sketched below). Judging from the logs, the standby state itself was created correctly, i.e. the aborted messages were not taken into account, yet the state on disk was still corrupted; this is easy to reproduce. One more analysis:

  • Node A Thread-2 processes the message, crashes
  • Node B Thread-2 is the standby replica, takes over, crashes
  • Node B Thread-1 takes over, detects the duplicate
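
The configuration sketch mentioned above, assuming a plain Properties-based StreamsConfig (only the standby-replica property differs between the two test runs):

    Properties props = new Properties();
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
    // With 1 the corrupted store is reproducible; with 0 the problem disappears.
    props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 0);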

EDIT 2 Detailed logs related to the checkpoint files. First a valid command is sent, then an invalid one (so the changelog topic is not empty).

NODE_3 2020-04-15 21:06:14.470 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.ProcessorStateManager        : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] task [1_2] Checkpointable offsets read from checkpoint: {}
NODE_3 2020-04-15 21:06:19.413 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.ProcessorStateManager        : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] task [1_2] Restoring state store COMMAND_ID_STORE from changelog topic XXXXProcessor-COMMAND_ID_STORE-changelog at checkpoint null
NODE_3 2020-04-15 21:06:28.470 TRACE 1 --- [-StreamThread-1] o.a.k.s.p.i.ProcessorStateManager        : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-1] standby-task [1_2] Checkpointable offsets read from checkpoint: {}
NODE_3 2020-04-15 21:06:29.634 TRACE 1 --- [-StreamThread-2] o.a.k.s.s.internals.OffsetCheckpoint     : Writing tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp
NODE_3 2020-04-15 21:06:29.640 TRACE 1 --- [-StreamThread-2] o.a.k.s.s.internals.OffsetCheckpoint     : Swapping tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint
NODE_3 2020-04-15 21:11:15.909 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint     : Writing tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp
NODE_3 2020-04-15 21:11:15.912 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint     : Swapping tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint

NODE_1 log1:2020-04-15 21:11:33.942 DEBUG 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator  : CommandId: mnl_cmd_9f1752da-45b7-4ef7-9ef8-209d826530bc is not a duplicate.

NODE_3 2020-04-15 21:11:47.195 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint     : Writing tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp
NODE_3 2020-04-15 21:11:47.233 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint     : Swapping tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint
NODE_3 2020-04-15 21:11:49.075 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.ProcessorStateManager        : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] task [1_2] Restoring state store COMMAND_ID_STORE from changelog topic XXXXProcessor-COMMAND_ID_STORE-changelog at checkpoint 1
NODE_3 2020-04-15 21:11:49.436 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.StoreChangelogReader         : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] Found checkpoint 1 from changelog XXXXProcessor-COMMAND_ID_STORE-changelog-2 for store COMMAND_ID_STORE.

NODE_3 2020-04-15 21:11:52.023 DEBUG 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator  : CommandId: mnl_cmd_9f1752da-45b7-4ef7-9ef8-209d826530bc is not a duplicate.
NODE_3 2020-04-15 21:11:53.683 ERROR 1 --- [-StreamThread-2] o.a.k.s.p.i.AssignedStreamsTasks         : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] Failed to process stream task 1_2 due to the following error: java.lang.RuntimeException

NODE_3 2020-04-15 21:12:05.346 TRACE 1 --- [-StreamThread-1] o.a.k.s.p.i.ProcessorStateManager        : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-1] task [1_2] Restoring state store COMMAND_ID_STORE from changelog topic XXXXProcessor-COMMAND_ID_STORE-changelog at checkpoint 1
NODE_3 2020-04-15 21:12:05.562 TRACE 1 --- [-StreamThread-1] o.a.k.s.p.i.StoreChangelogReader         : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-1] Found checkpoint 1 from changelog XXXXProcessor-COMMAND_ID_STORE-changelog-2 for store COMMAND_ID_STORE.

NODE_3 2020-04-15 21:12:06.424  WARN 1 --- [-StreamThread-1] c.g.f.c.s.validation.CommandIdValidator  : Command duplicate detected. Command id mnl_cmd_9f1752da-45b7-4ef7-9ef8-209d826530bc

Thanks for any help.

The code operating on COMMAND_ID_STORE (it is similar to the Kafka Streams examples)

Adding the state store

    // RETENTION and WINDOW_SIZE are java.time.Duration constants (we keep one hour of command ids)
    WindowBytesStoreSupplier storeSupplier = Stores.persistentWindowStore(storeName,
        RETENTION,
        WINDOW_SIZE,
        true);
    StoreBuilder<WindowStore<String, String>> storeBuilder = Stores
        .windowStoreBuilder(storeSupplier, Serdes.String(), Serdes.String());
    streamsBuilder.addStateStore(storeBuilder);

Fetching and putting

public static boolean isCommandIdDuplicate(String commandId, WindowStore<String, String> commandIdStore) {
    long previousHourStart = DateTimeUtils.previousHourStart();

    final WindowStoreIterator<String> storeIterator = commandIdStore
        .fetch(commandId, previousHourStart, DateTimeUtils.currentTimeMillis());
    final boolean hasNext = storeIterator.hasNext();
    log.info("hasNext: {}", hasNext);
    while (storeIterator.hasNext()) {
      log.info("next: {}", storeIterator.next());
    }
    storeIterator.close();
    return hasNext;

  }

    public static void putCommandIdInStore(String commandId, WindowStore<String, String> commandIdStore) {
      final long windowStartTimestamp = DateTimeUtils.thisHourStart();
      log.info("Putting commandId in store: {} {}", commandId, windowStartTimestamp);
      commandIdStore.put(commandId, commandId, windowStartTimestamp);
    }

Transform

.transformValues((ValueTransformerSupplier<YYY, ? extends XXX>) CommandIdValidator::new,
    COMMAND_ID_STORE);

The transformer

@Slf4j
public class CommandIdValidator<T extends GeneratedMessageV3> implements
    ValueTransformer<T, XXX<T>> {

  WindowStore<String, String> commandIdStore;

  @Override
  public void init(ProcessorContext context) {
    commandIdStore = (WindowStore<String, String>) context.getStateStore(COMMAND_ID_STORE);
  }

  @Override
  public XXX<T> transform(T command) {

    log.debug("Validating commandId.");

    String commandId = getCommandId(command);

    if (CommandIdValidationUtils.isCommandIdDuplicate(commandId, commandIdStore)) {
      log.warn("Command duplicate detected. Command id " + commandId);
      return new XXX(command,
          XXX.duplicatedCommandId(commandId, command.getClass().getName()));
    } else {
      log.debug("CommandId: {} is not a duplicate.", commandId);
      CommandIdValidationUtils.putCommandIdInStore(commandId, commandIdStore);
      return XXX.successful(command);
    }
  }

  @Override
  public void close() {

  }

}

What you describe does indeed sound like a bug in Kafka Streams.

We rolled back to kafka clients 2.3.1 and the problem persists.

Rolled back from which version? 2.4.0, 2.4.1, and 2.5.0 were released recently; can you try them and see whether the problem still exists? (Some EOS bugs have been fixed; I could not find a ticket that matches the issue you describe, so I am not sure whether the bug still exists.)

How EOS is supposed to work:

For the local RocksDB instance, Kafka Streams also writes a "checkpoint" file to local disk that contains some metadata. If the checkpoint file exists, we know that RocksDB is "clean", i.e. in sync with the changelog topic. If the checkpoint file does not exist, we know that the state is corrupted; the whole RocksDB instance is then wiped out on startup and the state is rebuilt from the changelog. Hence, the checkpoint file should only be written during a clean task shutdown (i.e. on a rebalance), and it should be deleted after a task is assigned and before processing starts. While processing there should be no checkpoint file, so if we crash we know the store is corrupted and we rebuild it from the changelog.
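
For illustration, a minimal sketch of how the per-task checkpoint file can be inspected, using the internal OffsetCheckpoint class that appears in the TRACE logs above (an internal API, so it may change between versions; the path is the one from this setup):

    import java.io.File;
    import java.util.Map;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.streams.state.internals.OffsetCheckpoint;

    public class CheckpointDump {

      public static void main(String[] args) throws Exception {
        // Per-task checkpoint file; it should be absent while the task is actively processing.
        File checkpointFile = new File("/tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint");
        if (!checkpointFile.exists()) {
          System.out.println("No checkpoint file - the store would be treated as corrupted and rebuilt.");
          return;
        }
        // Maps each changelog partition to the offset up to which the local store is in sync.
        Map<TopicPartition, Long> offsets = new OffsetCheckpoint(checkpointFile).read();
        offsets.forEach((tp, offset) -> System.out.println(tp + " -> " + offset));
      }
    }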

One theory, which may be wrong, is the following: when the first thread crashes (on instance A), no checkpoint file is written, which is correct. After the first rebalance, a StandbyTask is assigned on instance A; this StandbyTask should wipe out the local state store (because there is no checkpoint file), and the bug would be that this wipe-out does not happen (maybe you can verify this via the logs; the wiping should be logged, or some "create file" timestamps on the file system may help). When the second thread crashes on instance B, another rebalance is triggered. The StandbyTask incorrectly assumes that it is in a clean state and writes a checkpoint file. When the active task is later started on another thread of instance A, it simply takes over the corrupted state, because the (incorrectly) existing checkpoint file indicates that the state is consistent.

  • When you switch to in-memory stores, the problem naturally goes away, because there is no local state on disk and no checkpoint file, so the state is always rebuilt from the changelog topic.
  • When you disable StandbyTasks, no checkpoint file is written during the second rebalance, so when the active task starts on the second thread of instance A, it now correctly wipes out the store and rebuilds it from the changelog.

Maybe you can try to verify this, and also open a bug report?
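
To help with that verification, a minimal sketch that prints the creation times of the files in a task's store directory (the path is the one from this setup; note that not every file system records creation times): if the StandbyTask had wiped and recreated the store, all files should be newer than the first crash.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.util.stream.Stream;

    public class StateDirTimestamps {

      public static void main(String[] args) throws IOException {
        Path storeDir = Path.of("/tmp/kafka-streams/XXXXProcessor/1_2/COMMAND_ID_STORE");
        try (Stream<Path> files = Files.walk(storeDir)) {
          files.filter(Files::isRegularFile).forEach(file -> {
            try {
              BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
              System.out.println(attrs.creationTime() + "  " + file);
            } catch (IOException e) {
              throw new UncheckedIOException(e);
            }
          });
        }
      }
    }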

This turned out to be a bug and it has been fixed in subsequent releases: KAFKA-9891