Reading BigQuery Numeric Data Type From Table Using SchemaAndRecord class
While developing my code, I used the following snippet to read table data from BigQuery.
PCollection<ReasonCode> gpseEftReasonCodes = input
    .apply("Reading xxyyzz",
        BigQueryIO.read(new ReadTable<ReasonCode>(ReasonCode.class))
            .withoutValidation()
            .withTemplateCompatibility()
            .fromQuery("Select * from dataset.xxyyzz")
            .usingStandardSql()
            .withCoder(SerializableCoder.of(ReasonCode.class)));
ReadTable class:
import java.util.HashMap;
import java.util.Map;

import org.apache.avro.Schema.Field;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.io.gcp.bigquery.SchemaAndRecord;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.schemas.JavaBeanSchema;
import org.apache.beam.sdk.schemas.annotations.DefaultSchema;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.gson.Gson;
import com.google.gson.JsonElement;

@DefaultSchema(JavaBeanSchema.class)
public class ReadTable<T> implements SerializableFunction<SchemaAndRecord, T> {
    private static final long serialVersionUID = 1L;
    private static final Gson gson = new Gson();
    public static final Logger LOG = LoggerFactory.getLogger(ReadTable.class);
    private final Counter countingRecords =
        Metrics.counter(ReadTable.class, "Reading Records EFT Report");
    private final Class<T> class1;

    public ReadTable(Class<T> class1) {
        this.class1 = class1;
    }

    @Override
    public T apply(SchemaAndRecord schemaAndRecord) {
        Map<String, String> mapping = new HashMap<>();
        try {
            GenericRecord s = schemaAndRecord.getRecord();
            org.apache.avro.Schema s1 = s.getSchema();
            // Copy every Avro field into a String map, looking the value up by
            // field name. (The original looked values up by a running counter,
            // s.get(counter), which reads the wrong positions.)
            for (Field f : s1.getFields()) {
                Object value = s.get(f.name());
                mapping.put(f.name(), value == null ? null : String.valueOf(value));
            }
            countingRecords.inc();
            // Round-trip the map through JSON to populate the target POJO.
            JsonElement jsonElement = gson.toJsonTree(mapping);
            return gson.fromJson(jsonElement, class1);
        } catch (Exception mp) {
            LOG.error("Found wrong mapping for the record: " + mapping, mp);
            return null;
        }
    }
}
So after reading the data from BigQuery, I map it from SchemaAndRecord to a POJO, and for columns whose data type is NUMERIC I get values like the one below.
last_update_amount=java.nio.HeapByteBuffer[pos=0 lim=16 cap=16]
My expectation was that I would get the exact value, but I am getting a HeapByteBuffer instead. The version I am using is Apache Beam 2.12.0. Please let me know if more information is needed.
Second approach tried:
GenericRecord s = schemaAndRecord.getRecord();
org.apache.avro.Schema s1 = s.getSchema();
for (Field f : s1.getFields()) {
    Object value = s.get(f.name());
    mapping.put(f.name(), value == null ? null : String.valueOf(value));
    if (f.name().equalsIgnoreCase("reason_code_id")) {
        // Throws below: s1 is the record schema, so s1.getType() is RECORD,
        // and Schema.create() cannot build a RECORD schema (see the note
        // after the stack trace).
        BigDecimal numericValue =
            new Conversions.DecimalConversion()
                .fromBytes((ByteBuffer) s.get(f.name()),
                    Schema.create(s1.getType()), s1.getLogicalType());
        System.out.println("Numeric Con" + numericValue);
    } else {
        System.out.println("Else Condition " + f.name());
    }
}
Facing Issue:
2019-05-24 (14:10:37) org.apache.avro.AvroRuntimeException: Can't create a: RECORD
Stack trace:
java.io.IOException: Failed to start reading from source: gs://trusted-bucket/mgp/temp/BigQueryExtractTemp/3a5365f1e53d4dd393f0eda15a2c6bd4/000000000000.avro range [0, 65461)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.start(WorkerCustomSources.java:596)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:411)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:380)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:306)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.avro.AvroRuntimeException: Can't create a: RECORD
at org.apache.avro.Schema.create(Schema.java:120)
at com.globalpay.WelcomeEmail.mapRecordToObject(WelcomeEmail.java:118)
at com.globalpay.WelcomeEmail.access$0(WelcomeEmail.java:112)
at com.globalpay.WelcomeEmail.apply(WelcomeEmail.java:54)
at com.globalpay.WelcomeEmail.apply(WelcomeEmail.java:1)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.apply(BigQuerySourceBase.java:221)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.apply(BigQuerySourceBase.java:214)
at org.apache.beam.sdk.io.AvroSource$AvroBlock.readNextRecord(AvroSource.java:567)
at org.apache.beam.sdk.io.BlockBasedSource$BlockBasedReader.readNextRecord(BlockBasedSource.java:209)
at org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.advanceImpl(FileBasedSource.java:484)
at org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.startImpl(FileBasedSource.java:479)
at org.apache.beam.sdk.io.OffsetBasedSource$OffsetBasedReader.start(OffsetBasedSource.java:249)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.start(WorkerCustomSources.java:593)
... 14 more
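The "Can't create a: RECORD" failure points at Schema.create(s1.getType()): s1 is the schema of the whole record, so s1.getType() is RECORD, and Avro's Schema.create() can only build primitive schemas. The decimal conversion needs the field's own schema instead. BigQuery exports NUMERIC to Avro as bytes with a decimal logical type, which is also why the raw value prints as a HeapByteBuffer. Below is a minimal sketch of a decode done against the field schema; the helper name and the union handling are illustrative, not from the original post.

import java.math.BigDecimal;
import java.nio.ByteBuffer;
import org.apache.avro.Conversions;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;

class NumericDecoder {
    // Decode one BigQuery NUMERIC field of an Avro GenericRecord into a BigDecimal.
    static BigDecimal decode(GenericRecord record, String fieldName) {
        Schema fieldSchema = record.getSchema().getField(fieldName).schema();
        // A nullable NUMERIC column arrives as a union [null, bytes]; unwrap to
        // the non-null branch, which carries the decimal logical type.
        if (fieldSchema.getType() == Schema.Type.UNION) {
            for (Schema branch : fieldSchema.getTypes()) {
                if (branch.getType() != Schema.Type.NULL) {
                    fieldSchema = branch;
                    break;
                }
            }
        }
        ByteBuffer buffer = (ByteBuffer) record.get(fieldName);
        return buffer == null
            ? null
            : new Conversions.DecimalConversion()
                .fromBytes(buffer, fieldSchema, fieldSchema.getLogicalType());
    }
}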
The overall approach is correct. It is hard to figure out what exactly is going wrong here; if possible, please paste the full stack trace. Also, look at the examples of how to use BigQueryIO.read(); they may be helpful: https://beam.apache.org/releases/javadoc/2.13.0/org/apache/beam/sdk/io/gcp/bigquery/BigQueryIO.html
You can use readTableRows() instead of read() to get parsed values. Or follow the example of the TableRowParser implementation to see how such a parser works (it is used within readTableRows()): https://github.com/apache/beam/blob/79d478a83be221461add1501e218b9a4308f9ec8/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryIO.java#L449
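A minimal sketch of that route, assuming a pipeline p and the question's query; the field name is illustrative, and it assumes the SDK version in use renders NUMERIC in the TableRow in parsed form:

// Sketch (assumed names): read parsed rows instead of raw Avro records.
PCollection<TableRow> rows = p.apply("Read xxyyzz",
    BigQueryIO.readTableRows()
        .fromQuery("Select * from dataset.xxyyzz")
        .usingStandardSql()
        .withoutValidation()
        .withTemplateCompatibility());

// A NUMERIC column can then be taken off the TableRow and parsed as a
// decimal string, without touching Avro bytes (assumes a non-null column).
PCollection<BigDecimal> amounts = rows.apply("Extract amount",
    MapElements.into(TypeDescriptor.of(BigDecimal.class))
        .via(row -> new BigDecimal(row.get("last_update_amount").toString())));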
Update
Apparently the ability to read rows using Beam schemas was added recently: https://github.com/apache/beam/pull/8620
You should now be able to do something along these lines:
p.apply(BigQueryIO.readTableRowsWithSchema())
.apply(Convert.to(PojoClass.class));
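A slightly fuller sketch of that route. It assumes ReasonCode is a Java bean whose fields line up with the table's columns and that the Beam release in use contains the change above; note that the DefaultSchema annotation belongs on the POJO itself (in the question it sits on the ReadTable function, where it has no effect):

// Hypothetical POJO: a Java bean annotated for Beam schema inference.
// A NUMERIC column should surface as a DECIMAL schema field, i.e. BigDecimal.
@DefaultSchema(JavaBeanSchema.class)
public class ReasonCode {
    private BigDecimal lastUpdateAmount;
    public BigDecimal getLastUpdateAmount() { return lastUpdateAmount; }
    public void setLastUpdateAmount(BigDecimal lastUpdateAmount) { this.lastUpdateAmount = lastUpdateAmount; }
}

PCollection<ReasonCode> codes = p
    .apply(BigQueryIO.readTableRowsWithSchema()
        .fromQuery("Select * from dataset.xxyyzz")
        .usingStandardSql())
    .apply(Convert.to(ReasonCode.class));

Bean property names must match the column names for Convert.to to find them; a property can be mapped to a column such as last_update_amount with the SchemaFieldName annotation.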