Apache Beam/Google Dataflow PubSub to BigQuery Pipeline: Handling Insert Errors and Unexpected Retry Behavior
I have downloaded a copy of the Pub/Sub to BigQuery Dataflow template from Google's GitHub repository, and I am running it on my local machine using the direct-runner.
In testing, I confirmed that the template only writes failures to the "deadletter" table if an error occurs during UDF processing or during the conversion from JSON to TableRow.
I would also like to handle failures that occur when inserting into BigQuery more gracefully, by sending them to a separate TupleTag as well, so that they can also be sent to the deadletter table or another output for review and processing. Currently, when executing with the dataflow-runner, these errors are only written to the Stackdriver logs and are retried indefinitely until the problem is resolved.
Question one: When testing locally and publishing messages whose format does not match the target table's schema, the insert is retried 5 times and then the pipeline crashes with a RuntimeException along with the error returned in the HTTP response from Google's API. I believe this behavior is set within BigQueryServices.Impl:
private static final FluentBackoff INSERT_BACKOFF_FACTORY =
FluentBackoff.DEFAULT.withInitialBackoff(Duration.millis(200)).withMaxRetries(5);
However, based on Google's documentation,
"When running in streaming mode, a bundle including a failing item
will be retried indefinitely, which may cause your pipeline to
permanently stall."
and the fact that Beam's PubsubIO is documented to
create and consume unbounded PCollections
I was under the impression that streaming mode should be enabled by default when reading from Pub/Sub. I even added the STREAMING_INSERTS method to my call to writeTableRows(), and it did not affect this behavior.
.apply(
    "WriteSuccessfulRecords",
    BigQueryIO.writeTableRows()
        .withMethod(Method.STREAMING_INSERTS)
- Is this behavior somehow influenced by which runner I am using? If not, where is my understanding wrong?
Question two:
- Is there a performance difference between using BigQueryIO.write and BigQueryIO.writeTableRows?
I ask because I don't see how to capture insert-related errors without creating my own static class that overrides the expand method and uses a ParDo and DoFn, where I can add my own custom logic to create separate TupleTags for successful and failed records, similar to what was done in the JavascriptTextTransformer for FailsafeJavascriptUdf.
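To illustrate, this is roughly the kind of wrapper I am picturing; the TagSuccessAndFailure class and its tags are my own sketch modeled on the FailsafeJavascriptUdf pattern, not something from the template:

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

// Rough sketch: a PTransform whose expand() runs a ParDo that tags each element
// as a success or a failure, so downstream steps can route them separately.
class TagSuccessAndFailure extends PTransform<PCollection<TableRow>, PCollectionTuple> {

  static final TupleTag<TableRow> SUCCESS_TAG = new TupleTag<TableRow>() {};
  static final TupleTag<TableRow> FAILURE_TAG = new TupleTag<TableRow>() {};

  @Override
  public PCollectionTuple expand(PCollection<TableRow> input) {
    return input.apply(
        "TagRows",
        ParDo.of(
                new DoFn<TableRow, TableRow>() {
                  @ProcessElement
                  public void processElement(ProcessContext context) {
                    TableRow row = context.element();
                    try {
                      // Custom per-row validation or insert logic would go here.
                      context.output(row);
                    } catch (Exception e) {
                      context.output(FAILURE_TAG, row);
                    }
                  }
                })
            .withOutputTags(SUCCESS_TAG, TupleTagList.of(FAILURE_TAG)));
  }
}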
Update:
public static PipelineResult run(DirectOptions options) {
  options.setRunner(DirectRunner.class);

  Pipeline pipeline = Pipeline.create(options);

  // Register the coder for pipeline
  FailsafeElementCoder<PubsubMessage, String> coder =
      FailsafeElementCoder.of(PubsubMessageWithAttributesCoder.of(), StringUtf8Coder.of());
  CoderRegistry coderRegistry = pipeline.getCoderRegistry();
  coderRegistry.registerCoderForType(coder.getEncodedTypeDescriptor(), coder);

  PCollectionTuple transformOut =
      pipeline
          // Step #1: Read messages in from Pub/Sub
          .apply(
              "ReadPubsubMessages",
              PubsubIO.readMessagesWithAttributes().fromTopic(options.getInputTopic()))
          // Step #2: Transform the PubsubMessages into TableRows
          .apply("ConvertMessageToTableRow", new PubsubMessageToTableRow(options));

  WriteResult writeResult = null;

  try {
    writeResult =
        transformOut
            .get(TRANSFORM_OUT)
            .apply(
                "WriteSuccessfulRecords",
                BigQueryIO.writeTableRows()
                    .withMethod(Method.STREAMING_INSERTS)
                    .withoutValidation()
                    .withCreateDisposition(CreateDisposition.CREATE_NEVER)
                    .withWriteDisposition(WriteDisposition.WRITE_APPEND)
                    .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
                    .to("myproject:MyDataSet.MyTable"));
  } catch (Exception e) {
    System.out.print("Cause of the Standard Insert Failure is: ");
    System.out.print(e.getCause());
  }

  try {
    writeResult
        .getFailedInserts()
        .apply(
            "WriteFailedInsertsToDeadLetter",
            BigQueryIO.writeTableRows()
                .to(options.getOutputDeadletterTable())
                .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(WriteDisposition.WRITE_APPEND));
  } catch (Exception e) {
    System.out.print("Cause of the Error Insert Failure is: ");
    System.out.print(e.getCause());
  }

  PCollectionList.of(transformOut.get(UDF_DEADLETTER_OUT))
      .and(transformOut.get(TRANSFORM_DEADLETTER_OUT))
      .apply("Flatten", Flatten.pCollections())
      .apply(
          "WriteFailedRecords",
          WritePubsubMessageErrors.newBuilder()
              .setErrorRecordsTable(
                  maybeUseDefaultDeadletterTable(
                      options.getOutputDeadletterTable(),
                      options.getOutputTableSpec(),
                      DEFAULT_DEADLETTER_TABLE_SUFFIX))
              .setErrorRecordsTableSchema(getDeadletterTableSchemaJson())
              .build());

  return pipeline.run();
}
Error:
Cause of the Error Insert Failure is: null[WARNING]
java.lang.NullPointerException: Outputs for non-root node WriteFailedInsertsToDeadLetter are null
at org.apache.beam.repackaged.beam_sdks_java_core.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:864)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:672)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:660)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access0(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:575)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:310)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
at com.google.cloud.teleport.templates.PubSubToBigQuery.run(PubSubToBigQuery.java:312)
at com.google.cloud.teleport.templates.PubSubToBigQuery.main(PubSubToBigQuery.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo.run(ExecJavaMojo.java:282)
at java.lang.Thread.run(Thread.java:748)
In the latest versions of Beam, the BigQueryIO.Write transform returns a WriteResult object which enables you to retrieve a PCollection of the TableRows that failed to be written to BigQuery. Using this, you can easily retrieve the failures, format them into the structure of your deadletter output, and resubmit the records to BigQuery. This removes the need for a separate class to manage successful and failed records.
Below is an example of what this might look like for your pipeline.
// Attempt to write the table rows to the output table.
WriteResult writeResult =
    pipeline.apply(
        "WriteRecordsToBigQuery",
        BigQueryIO.writeTableRows()
            .to(options.getOutputTable())
            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(WriteDisposition.WRITE_APPEND)
            .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors()));

/*
 * 1) Get the failed inserts.
 * 2) Transform them to the deadletter table format.
 * 3) Output them to the deadletter table.
 */
writeResult
    .getFailedInserts()
    .apply("FormatFailedInserts", ParDo.of(new FailedInsertFormatter()))
    .apply(
        "WriteFailedInsertsToDeadletter",
        BigQueryIO.writeTableRows()
            .to(options.getDeadletterTable())
            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(WriteDisposition.WRITE_APPEND));
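FailedInsertFormatter is not spelled out above; a minimal sketch is below. The deadletter column names used here ("timestamp" and "payloadString") are assumptions, so adjust them to whatever your deadletter table's schema actually contains.

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.transforms.DoFn;

// Sketch of a formatter that turns a failed insert row into a deadletter row.
// The output column names are assumptions; match them to your deadletter schema.
class FailedInsertFormatter extends DoFn<TableRow, TableRow> {

  @ProcessElement
  public void processElement(ProcessContext context) {
    TableRow failedRow = context.element();
    TableRow deadletterRow =
        new TableRow()
            .set("timestamp", context.timestamp().toString())
            .set("payloadString", failedRow.toString());
    context.output(deadletterRow);
  }
}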
Additionally, to answer your questions:
- Per the Beam docs, you must set the streaming option to true in the pipeline options for the DirectRunner (a short sketch follows below).
- There should be no performance difference. In either case, you need to convert the input records to TableRow objects; it should make no difference whether you do that beforehand in a ParDo or in a serializable function passed to BigQueryIO.Write.withFormatFunction (also sketched below).
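To set streaming mode on the DirectRunner, something like this at the top of the run(DirectOptions options) method from your update should work; options.as(...) works for any PipelineOptions, so it does not matter whether DirectOptions itself extends StreamingOptions:

// Enable streaming mode explicitly before the pipeline is created
// (StreamingOptions is org.apache.beam.sdk.options.StreamingOptions).
options.as(StreamingOptions.class).setStreaming(true);
options.setRunner(DirectRunner.class);
Pipeline pipeline = Pipeline.create(options);

And for the withFormatFunction alternative, a hypothetical sketch; MyEvent and toTableRow below are placeholders for your own element type and conversion logic, not part of the template:

// Writes arbitrary elements by converting them to TableRows inside BigQueryIO itself,
// instead of converting to TableRow in an upstream ParDo.
static WriteResult writeEvents(PCollection<MyEvent> events, String tableSpec) {
  return events.apply(
      "WriteEvents",
      BigQueryIO.<MyEvent>write()
          .to(tableSpec)
          .withFormatFunction(event -> toTableRow(event))
          .withCreateDisposition(CreateDisposition.CREATE_NEVER)
          .withWriteDisposition(WriteDisposition.WRITE_APPEND));
}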