Read parquet data from Azure Blob container without downloading it locally
I am reading parquet files from a Blob container using the Azure SDK, avro-parquet, and the Hadoop libraries. Currently I download the blob to a temporary file and then create a ParquetReader over it:
try (InputStream input = blob.openInputStream()) {
    // Download the blob to a temporary local file first
    Path tmp = Files.createTempFile("tempFile", ".parquet");
    Files.copy(input, tmp, StandardCopyOption.REPLACE_EXISTING);
    InputFile file = HadoopInputFile.fromPath(
            new org.apache.hadoop.fs.Path(tmp.toFile().getPath()),
            new Configuration());
    // Close the reader as well (the original leaked it)
    try (ParquetReader<GenericRecord> reader =
            AvroParquetReader.<GenericRecord>builder(file).build()) {
        GenericRecord record;
        while ((record = reader.read()) != null) {
            recordList.add(record);
        }
    }
} catch (IOException | StorageException e) {
    log.error(e.getMessage(), e);
}
I would like to read the file through the InputStream of the Azure blob item instead of downloading it to my machine. S3 has a way to do this, but is it possible with Azure?
I found out how to do it:
// Authenticate against the storage account and locate the blob
StorageCredentials credentials = new StorageCredentialsAccountAndKey(accountName, accountKey);
CloudStorageAccount connection = new CloudStorageAccount(credentials, true);
CloudBlobClient blobClient = connection.createCloudBlobClient();
CloudBlobContainer container = blobClient.getContainerReference(containerName);
CloudBlob blob = container.getBlockBlobReference(fileName);

// Point Hadoop's Azure filesystem (wasbs://) at the container via a SAS token
Configuration config = new Configuration();
config.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
config.set("fs.azure.sas.<containerName>.<accountName>.blob.core.windows.net", token);

// Read the parquet file directly from blob storage, with no local copy
URI uri = new URI("wasbs://<containerName>@<accountName>.blob.core.windows.net/" + blob.getName());
InputFile file = HadoopInputFile.fromPath(new org.apache.hadoop.fs.Path(uri), config);
try (ParquetReader<GenericRecord> reader =
        AvroParquetReader.<GenericRecord>builder(file).build()) {
    GenericRecord record;
    while ((record = reader.read()) != null) {
        System.out.println(record);
    }
}
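As an aside, the same two settings can also be supplied declaratively through a `core-site.xml` on the classpath instead of being set programmatically; this is only a sketch, with `<containerName>`, `<accountName>`, and the SAS token left as placeholders to fill in. Note that `NativeAzureFileSystem` lives in the `hadoop-azure` module, which must be on the classpath either way.

```xml
<!-- core-site.xml: equivalent of the programmatic Configuration above.
     <containerName>, <accountName>, and the SAS token are placeholders. -->
<configuration>
  <property>
    <name>fs.azure</name>
    <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
  </property>
  <property>
    <name>fs.azure.sas.&lt;containerName&gt;.&lt;accountName&gt;.blob.core.windows.net</name>
    <value><!-- SAS token goes here --></value>
  </property>
</configuration>
```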