GCP: What is the best option to set up a periodic data pipeline from Spanner to BigQuery?
Task: We have to set up a periodic sync of records from Spanner to BigQuery. Our Spanner database has a relational table hierarchy.
Options considered: I was thinking of using Dataflow templates to set up this data pipeline.
Option 1: Set up a job with the Dataflow template 'Cloud Spanner to Cloud Storage Text', followed by the template 'Cloud Storage Text to BigQuery'. Con: the first template works only on a single table, and we have many tables to export.
Option 2: Use the 'Cloud Spanner to Cloud Storage Avro' template, which exports the entire database. Con: I only need to export selected tables from the database, and I don't see a template for importing Avro into BigQuery.
Question: Please suggest the best option for setting up this pipeline.
There is currently no off-the-shelf parameterised direct export from Cloud Spanner to BigQuery.
To meet your requirements, a custom Dataflow job (Spanner Dataflow connector, Dataflow templates) scheduled periodically (1, 2) would be your best bet. Incremental exports would require implementing change tracking in your database, which can be done with commit timestamps.
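For illustration, here is a minimal sketch of what the incremental read could look like, assuming a commit-timestamp column named LastUpdateTime (created with allow_commit_timestamp=true) on the Spanner table and a lastSync watermark that you persist between runs; the table, column, instance and database names are placeholders, not part of the original answer.
package org.polleyg;

import com.google.cloud.Timestamp;
import com.google.cloud.spanner.Statement;
import com.google.cloud.spanner.Struct;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

/**
 * Sketch of an incremental Spanner read driven by a commit-timestamp column.
 * All resource names are placeholders.
 */
public class IncrementalSpannerRead {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation().as(DataflowPipelineOptions.class);
        Pipeline pipeline = Pipeline.create(options);

        // Watermark from the previous run; in practice you would persist this
        // (e.g. in a small metadata table) and pass it in as a pipeline option.
        Timestamp lastSync = Timestamp.parseTimestamp("2019-05-31T00:00:00Z");

        // Only rows committed since the last successful sync are read.
        PCollection<Struct> changedRows = pipeline.apply("read_changed_rows",
                SpannerIO.read()
                        .withInstanceId("my-spanner-instance")
                        .withDatabaseId("my-database")
                        .withQuery(Statement
                                .newBuilder("SELECT * FROM Singers WHERE LastUpdateTime > @lastSync")
                                .bind("lastSync").to(lastSync)
                                .build()));

        // ...convert changedRows to TableRow and write to BigQuery,
        // as in the full pipeline example further down.
        pipeline.run();
    }
}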
For a no-code solution, you would have to relax the requirements and periodically bulk-export all tables to Cloud Storage, then periodically bulk-import them into BigQuery. You can use a combination of a periodically triggered export from Cloud Spanner to Cloud Storage and a scheduled import from Cloud Storage into BigQuery.
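If you go the export/import route, the import side does not need a template: BigQuery can load Avro natively, so the files written by the 'Cloud Spanner to Cloud Storage Avro' export can be loaded with the BigQuery Java client. A minimal sketch, assuming an export bucket/path and a target dataset/table (all names here are placeholders):
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

/**
 * Sketch: load Avro files exported from Spanner into BigQuery.
 * Bucket, dataset and table names are placeholders.
 */
public class LoadAvroIntoBigQuery {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Avro files for one exported table; the export may write several
        // files per table, hence the wildcard.
        String sourceUri = "gs://my-export-bucket/my-database/Singers.avro-*";

        LoadJobConfiguration config = LoadJobConfiguration
                .newBuilder(TableId.of("spanner_to_bigquery", "singers"), sourceUri)
                .setFormatOptions(FormatOptions.avro())
                .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
                .build();

        // Start the load job and block until it finishes.
        Job job = bigquery.create(JobInfo.of(config)).waitFor();
        if (job.getStatus().getError() != null) {
            throw new RuntimeException(job.getStatus().getError().toString());
        }
    }
}
Scheduling this (and the Spanner export) on a cron-like cadence is what turns it into a periodic pipeline.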
Do it in one shot/pass with a single Dataflow pipeline. Here's an example I wrote using the Java SDK to help get you started. It reads from Spanner, transforms each record into a BigQuery TableRow using a ParDo, and then writes to BigQuery at the end. Under the hood it uses GCS, but that's all abstracted away from you as the user.
package org.polleyg;

import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.spanner.Struct;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

import java.util.ArrayList;
import java.util.List;

import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE;

/**
 * Reads the Singers table from Spanner, converts each row to a BigQuery
 * TableRow and writes the result to BigQuery.
 */
public class TemplatePipeline {
    public static void main(String[] args) {
        PipelineOptionsFactory.register(DataflowPipelineOptions.class);
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        Pipeline pipeline = Pipeline.create(options);

        // Read all rows from the Singers table in Spanner.
        PCollection<Struct> records = pipeline.apply("read_from_spanner",
                SpannerIO.read()
                        .withInstanceId("spanner-to-dataflow-to-bq")
                        .withDatabaseId("the-dude")
                        .withQuery("SELECT * FROM Singers"));

        // Convert each Spanner Struct into a BigQuery TableRow.
        records.apply("convert-2-bq-row", ParDo.of(new DoFn<Struct, TableRow>() {
            @ProcessElement
            public void processElement(ProcessContext c) throws Exception {
                TableRow row = new TableRow();
                row.set("id", c.element().getLong("SingerId"));
                row.set("first", c.element().getString("FirstName"));
                row.set("last", c.element().getString("LastName"));
                c.output(row);
            }
        // Write to BigQuery, creating the table if needed and truncating it on each run.
        })).apply("write-to-bq", BigQueryIO.writeTableRows()
                .to(String.format("%s:spanner_to_bigquery.singers", options.getProject()))
                .withCreateDisposition(CREATE_IF_NEEDED)
                .withWriteDisposition(WRITE_TRUNCATE)
                .withSchema(getTableSchema()));

        pipeline.run();
    }

    private static TableSchema getTableSchema() {
        List<TableFieldSchema> fields = new ArrayList<>();
        fields.add(new TableFieldSchema().setName("id").setType("INTEGER"));
        fields.add(new TableFieldSchema().setName("first").setType("STRING"));
        fields.add(new TableFieldSchema().setName("last").setType("STRING"));
        return new TableSchema().setFields(fields);
    }
}
Output logs:
00:10:54,011 0 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.BatchLoads - Writing BigQuery temporary files to gs://spanner-dataflow-bq/tmp/BigQueryWriteTemp/beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12/ before loading them.
00:10:59,332 5321 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.TableRowWriter - Opening TableRowWriter to gs://spanner-dataflow-bq/tmp/BigQueryWriteTemp/beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12/c374d44a-a7db-407e-aaa4-fe6aa5f6a9ef.
00:11:01,178 7167 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.WriteTables - Loading 1 files into {datasetId=spanner_to_bigquery, projectId=grey-sort-challenge, tableId=singers} using job {jobId=beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0, location=australia-southeast1, projectId=grey-sort-challenge}, attempt 0
00:11:02,495 8484 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl - Started BigQuery job: {jobId=beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0, location=australia-southeast1, projectId=grey-sort-challenge}.
bq show -j --format=prettyjson --project_id=grey-sort-challenge beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0
00:11:02,495 8484 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.WriteTables - Load job {jobId=beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0, location=australia-southeast1, projectId=grey-sort-challenge} started
00:11:03,183 9172 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl - Still waiting for BigQuery job beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0, currently in status {"state":"RUNNING"}
bq show -j --format=prettyjson --project_id=grey-sort-challenge beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0
00:11:05,043 11032 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl - BigQuery job {jobId=beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0, location=australia-southeast1, projectId=grey-sort-challenge} completed in state DONE
00:11:05,044 11033 [direct-runner-worker] INFO org.apache.beam.sdk.io.gcp.bigquery.WriteTables - Load job {jobId=beam_load_templatepipelinegrahampolley0531141053eff9d0d4_3dd2ba3a1c0347cf860241ddcd310a12_b4b4722df4326c6f5a93d7824981dc73_00001_00000-0, location=australia-southeast1, projectId=grey-sort-challenge} succeeded. Statistics: {"completionRatio":1.0,"creationTime":"1559311861461","endTime":"1559311863323","load":{"badRecords":"0","inputFileBytes":"81","inputFiles":"1","outputBytes":"45","outputRows":"2"},"startTime":"1559311862043","totalSlotMs":"218","reservationUsage":[{"name":"default-pipeline","slotMs":"218"}]}
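Since the question mentions many tables, one way to extend the example above is to add one read/convert/write branch per table to the same pipeline. Below is a rough sketch of that idea; the TableConfig holder and the per-table converters/schemas are hypothetical names introduced for illustration only.
package org.polleyg;

import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.spanner.Struct;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE;

/**
 * Hypothetical sketch: sync several selected Spanner tables to BigQuery in one
 * pipeline by building one read/convert/write branch per table.
 */
public class MultiTablePipeline {

    /** Illustrative per-table configuration: query, converter and BigQuery schema. */
    static class TableConfig implements Serializable {
        final String name;
        final String query;
        final DoFn<Struct, TableRow> converter;
        final TableSchema schema;

        TableConfig(String name, String query, DoFn<Struct, TableRow> converter, TableSchema schema) {
            this.name = name;
            this.query = query;
            this.converter = converter;
            this.schema = schema;
        }
    }

    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation().as(DataflowPipelineOptions.class);
        Pipeline pipeline = Pipeline.create(options);

        // One entry per table you want to sync; the converters and schemas
        // would look like the Singers example above (placeholders here).
        List<TableConfig> tables = Arrays.asList(
                // new TableConfig("singers", "SELECT * FROM Singers", new SingersToRowFn(), singersSchema()),
                // new TableConfig("albums", "SELECT * FROM Albums", new AlbumsToRowFn(), albumsSchema())
        );

        for (TableConfig t : tables) {
            pipeline.apply("read_" + t.name,
                    SpannerIO.read()
                            .withInstanceId("spanner-to-dataflow-to-bq")
                            .withDatabaseId("the-dude")
                            .withQuery(t.query))
                    .apply("convert_" + t.name, ParDo.of(t.converter))
                    .apply("write_" + t.name, BigQueryIO.writeTableRows()
                            .to(String.format("%s:spanner_to_bigquery.%s", options.getProject(), t.name))
                            .withCreateDisposition(CREATE_IF_NEEDED)
                            .withWriteDisposition(WRITE_TRUNCATE)
                            .withSchema(t.schema));
        }
        pipeline.run();
    }
}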