Delta tables on Azure HDInsight with Azure Blob Storage

I am trying to write a Delta table from HDInsight Spark 2.4.

I have configured my job as described at https://docs.delta.io/latest/delta-storage.html#configure-for-azure-blob-storage
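For reference, per that page the Azure Blob setup for Delta 0.5.x comes down to two settings: pointing Delta at the Azure LogStore implementation and supplying the storage account key. A sketch (the account name and key are placeholders; when set through Spark conf, the Hadoop key typically needs the spark.hadoop. prefix):

```
spark.delta.logStore.class=org.apache.spark.sql.delta.storage.AzureLogStore
spark.hadoop.fs.azure.account.key.<account>.blob.core.windows.net=<access-key>
```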

I have the following code:

// write() is a Dataset/DataFrame API, so "myrdd" must actually be a Dataset<Row>
myDataset.write().format("delta").mode(SaveMode.Append).partitionBy("col1", "col2")
        .save("wasbs://container@account.blob.core.windows.net/delta/table1");

The write succeeds and I can see the parquet files written to the storage location, but when I look at the _delta_log files I do not see the partition information; partitionBy is an empty array, see below:

{"commitInfo":{"timestamp":1586157735069,"operation":"WRITE","operationParameters":{"mode":"Append","partitionBy":"[]"},"isBlindAppend":true}}

The partition information is also missing from the individual parquet file entries:

{"add":{"path":"part-00000-10341955-1490-4fc4-a66c-e7fdd6765fb2-c000.snappy.parquet","partitionValues":{},"size":10473576,"modificationTime":1586157604000,"dataChange":true}}
{"add":{"path":"part-00001-13651729-a04c-400e-ba42-242df2d0afd4-c000.snappy.parquet","partitionValues":{},"size":3884853,"modificationTime":1586157734000,"dataChange":true}}
{"add":{"path":"part-00002-dc29cc35-ef55-4f71-8195-927d76867195-c000.snappy.parquet","partitionValues":{},"size":2449481,"modificationTime":1586157371000,"dataChange":true}}
{"add":{"path":"part-00003-0a8028fa-e910-420b-aa82-b85f4ee1ce4a-c000.snappy.parquet","partitionValues":{},"size":2680111,"modificationTime":1586157441000,"dataChange":true}}
{"add":{"path":"part-00004-414dc827-2860-44f2-82ff-67e7c6f53e50-c000.snappy.parquet","partitionValues":{},"size":3321879,"modificationTime":1586157381000,"dataChange":true}}
{"add":{"path":"part-00005-b7bb3b28-a78a-4733-be54-e30d88b8d360-c000.snappy.parquet","partitionValues":{},"size":4634113,"modificationTime":1586157618000,"dataChange":true}}

I am passing the following packages to my spark-submit:

io.delta:delta-core_2.11:0.5.0,org.apache.hadoop:hadoop-azure:3.2.0
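Put together, the spark-submit invocation would look roughly like the following sketch; the class name, jar name, and account details are hypothetical placeholders:

```shell
# Hypothetical invocation; adjust class, jar, account, and key to your job
spark-submit \
  --packages io.delta:delta-core_2.11:0.5.0,org.apache.hadoop:hadoop-azure:3.2.0 \
  --conf spark.delta.logStore.class=org.apache.spark.sql.delta.storage.AzureLogStore \
  --conf "spark.hadoop.fs.azure.account.key.<account>.blob.core.windows.net=<access-key>" \
  --class com.example.DeltaTableWriter \
  delta-job.jar
```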

Please let me know if I am missing something or have misunderstood anything.

According to the Delta Lake documentation, Delta Lake is supported starting with Spark 2.4.2.

HDInsight released a new version in July 2020 that ships Spark 2.4.4.

With this newer HDInsight release and Spark 2.4.4, I see the data written to the appropriate partitions.
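One way to confirm the fix is to inspect the add entries in the _delta_log commit files: after a correctly partitioned write, each entry should have populated partitionValues and a path containing col1=.../col2=... segments. A minimal sketch using a made-up commit entry (real entries live in the table's _delta_log/*.json files):

```shell
# Made-up "add" entry modeling a correctly partitioned file; the path and
# partition values are illustrative only.
entry='{"add":{"path":"col1=a/col2=b/part-00000-c000.snappy.parquet","partitionValues":{"col1":"a","col2":"b"},"size":1,"modificationTime":0,"dataChange":true}}'

# Count entries whose partitionValues is still empty; 0 means every file
# carries its partition information.
printf '%s\n' "$entry" | grep -c '"partitionValues":{}' || true   # → 0
```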