How to deal with DynamoDB boto3's batchwriteitem (called in lambda) skipping some items (not uploading to dynamodb)?

This is a project being developed on AWS.

I have scheduled my lambda function with a cron expression in CloudWatch. The function uploads items to DynamoDB every day.

Even though every item has a unique primary key, some items are not uploaded to DynamoDB. Sometimes consecutive items are skipped, and sometimes items with slightly similar primary keys are skipped. Usually, fewer than 20 items are skipped.

When I manually run the lambda function again, it works perfectly. I would like to know the reason behind this, and ideally a solution. Thanks!

The BatchWriteItem documentation explains that if the database runs into problems - most notably, if your request rate exceeds your provisioned capacity - BatchWriteItem may successfully write only some of the items in the batch. The list of items that were not written is returned in the UnprocessedItems attribute of the response, and you should retry writing those same unprocessed items:

If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.

If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem returns a ProvisionedThroughputExceededException.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
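The retry loop described above can be sketched in boto3 roughly as follows. This is a minimal sketch, not your lambda's actual code: the table name, item format, and retry parameters are placeholders you would adapt to your own schema.

```python
import time


def chunk(items, size=25):
    # BatchWriteItem accepts at most 25 put/delete requests per call,
    # so larger uploads must be split into chunks first.
    return [items[i:i + size] for i in range(0, len(items), size)]


def batch_write_with_retry(table_name, items, max_retries=5):
    """Write `items` (already in DynamoDB attribute-value format) to
    `table_name`, resubmitting UnprocessedItems with exponential backoff.
    Names and parameters here are illustrative, not from the question."""
    import boto3  # imported here so the pure chunk() helper has no AWS dependency

    client = boto3.client("dynamodb")
    for batch in chunk(items):
        request_items = {
            table_name: [{"PutRequest": {"Item": item}} for item in batch]
        }
        for attempt in range(max_retries):
            response = client.batch_write_item(RequestItems=request_items)
            unprocessed = response.get("UnprocessedItems", {})
            if not unprocessed:
                break
            # Exponential backoff before retrying only the leftover items.
            time.sleep((2 ** attempt) * 0.1)
            request_items = unprocessed
        else:
            raise RuntimeError("Items still unprocessed after all retries")
```

Alternatively, the higher-level resource API (`boto3.resource("dynamodb").Table(...).batch_writer()`) handles batching and resubmission of unprocessed items for you, which is often the simplest fix for this kind of silent skipping.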