Flink tries to recover checkpoints from deleted directories
After cleaning old files (files last accessed more than a month ago) out of the S3 bucket used to store checkpoints, some of the job's processes fail to start on restart or on recovery from the latest checkpoint because of the missing old files.
The job was running fine and saving checkpoints (saved to s3://flink-checkpoints/check/af8b0712ae0c1f20d2226b86e6bddb60/chk-100274):
2022-04-24 03:58:32.892 Triggering checkpoint 100273 @ 1653353912890 for job af8b0712ae0c1f20d2226b86e6bddb60.
2022-04-24 03:58:55.317 Completed checkpoint 100273 for job af8b0712ae0c1f20d2226b86e6bddb60 (679053131 bytes in 22090 ms).
2022-04-24 04:03:32.892 Triggering checkpoint 100274 @ 1653354212890 for job af8b0712ae0c1f20d2226b86e6bddb60.
2022-04-24 04:03:35.844 Completed checkpoint 100274 for job af8b0712ae0c1f20d2226b86e6bddb60 (9606712 bytes in 2494 ms).
After shutting down one TaskManager and restarting the job:
2022-04-24 04:04:40.936 Job test-job (af8b0712ae0c1f20d2226b86e6bddb60) switched from state RUNNING to RESTARTING.
2022-04-24 04:05:14.150 Job test-job (af8b0712ae0c1f20d2226b86e6bddb60) switched from state RESTARTING to RUNNING.
2022-04-24 04:05:14.198 Restoring job af8b0712ae0c1f20d2226b86e6bddb60 from latest valid checkpoint: Checkpoint 100274 @ 1653354212890 for af8b0712ae0c1f20d2226b86e6bddb60.
After a while the job fails because some processes cannot restore their state:
2022-04-24 04:05:17.095 Job test-job (af8b0712ae0c1f20d2226b86e6bddb60) switched from state RUNNING to RESTARTING.
2022-04-24 04:05:17.093 Process first events -> Sink: Sink to test-job (5/10) (4f9089b1015540eb6e13afe4c07fa97b) switched from RUNNING to FAILED.
java.lang.Exception: Exception while creating StreamOperatorStateContext.
Caused by: org.apache.flink.util.FlinkException: Could not restore keyed state backend for KeyedProcessOperator_f1d5710fb330fd579d15b292e305802c_(5/10) from any of the 1 provided restore options.
Caused by: org.apache.flink.runtime.state.BackendBuildingException: Caught unexpected exception.
Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to download data for state handles.
Caused by: com.facebook.presto.hive.s3.PrestoS3FileSystem$UnrecoverableS3OperationException: com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: tx0000000000000f0652d11-00628c2f4a-51f03da-default; S3 Extended Request ID: 51f03da-default-default), S3 Extended Request ID: 51f03da-default-default (Path: s3://flink-checkpoints/check/e3d82336005fc40be9af536938716199/shared/64452a30-c8a0-454f-8164-34d9e70142e0)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: tx0000000000000f0652d11-00628c2f4a-51f03da-default; S3 Extended Request ID: 51f03da-default-default)
2022-04-24 04:05:17.095 Job test-job (af8b0712ae0c1f20d2226b86e6bddb60) switched from state RUNNING to RESTARTING.
If I cancel the job completely, start a new one, and set the savepoint path to the last checkpoint, I get the same error.
Why does the job try to fetch files from the e3d82336005fc40be9af536938716199 folder while restoring from a checkpoint in the af8b0712ae0c1f20d2226b86e6bddb60 folder, and what are the rules for cleaning old checkpoints out of storage?
Update
I found that Flink stores the S3 paths of the RocksDB files of all TaskManagers in the chk-*/_metadata file.
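You can confirm this yourself with a crude scan of the (binary) _metadata file for embedded S3 paths. The following is a minimal, self-contained Java sketch; the local file path passed on the command line is hypothetical, and the comment about shared incremental-checkpoint files reflects the situation described above rather than anything the sketch verifies. Flink's org.apache.flink.runtime.checkpoint.Checkpoints class offers a structured reader if you want to parse the metadata properly.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Crude diagnostic: print every "s3://..." reference embedded in a checkpoint _metadata file.
// Pass the path of a locally downloaded copy of chk-*/_metadata as the first argument.
public class CheckpointMetadataScan {
    public static void main(String[] args) throws IOException {
        byte[] raw = Files.readAllBytes(Paths.get(args[0]));
        // ISO-8859-1 maps bytes 1:1 to chars, so the binary content survives the string conversion.
        String text = new String(raw, StandardCharsets.ISO_8859_1);
        Matcher m = Pattern.compile("s3://[\\x21-\\x7e]+").matcher(text);
        while (m.find()) {
            // Paths under .../<another-job-id>/shared/... are handles to files that this
            // checkpoint shares with checkpoints taken by an earlier job instance.
            System.out.println(m.group());
        }
    }
}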
This has been a somewhat murky area for a long time and was recently addressed in Flink 1.15. I recommend reading the 'Clarification of checkpoint and savepoint semantics' section of https://flink.apache.org/news/2022/05/05/1.15-announcement.html, including the part that compares checkpoints and savepoints.
The behavior you are running into depends on your checkpointing setup (aligned vs. unaligned).
By default, cancelling a job deletes its old checkpoints. There is a configuration flag that controls this: execution.checkpointing.externalized-checkpoint-retention. As Martijn mentioned, you would normally turn to savepoints for controlled job upgrades/restarts.
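For completeness, here is a minimal sketch of pinning that retention behavior down when the job is submitted. It assumes Flink 1.15's DataStream API; the 5-minute interval mirrors the trigger timestamps in the logs above, the RETAIN_ON_CANCELLATION value is just one possible choice, and the unaligned-checkpoints line is only there to show where that part of the setup lives.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointRetentionExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep externalized checkpoints on S3 even after the job is cancelled,
        // instead of the default DELETE_ON_CANCELLATION behavior.
        conf.setString("execution.checkpointing.externalized-checkpoint-retention",
                "RETAIN_ON_CANCELLATION");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.enableCheckpointing(300_000); // checkpoint every 5 minutes, as in the logs above
        env.getCheckpointConfig().enableUnalignedCheckpoints();

        // ... define sources, operators and sinks, then env.execute("test-job");
    }
}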
I found that Flink saves the S3 paths of all TaskManagers' RocksDB files in the chk-*/_metadata file.