Failed to rename JSON files in the "_delta_log" directory when using Delta Lake on Azure Blob Storage
I am running into a problem renaming _delta_log JSON files when performing parallel append operations against a single table.

Attempt recovered after RM restart.
User class threw exception: java.io.IOException: rename from wasbs://<container_name>@.blob.core.windows.net/delta_table/_delta_log/.00000000000000000243.json.f0bf5c51-b7ae-4da8-931e-b1acc21170f5.tmp to wasbs://<container_name>@.blob.core.windows.net/delta_table/_delta_log/00000000000000000243.json failed.

I am using Delta Lake version 0.5.0; please see the stack trace below:
at org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1548)
at org.apache.hadoop.fs.DelegateToFileSystem.renameInternal(DelegateToFileSystem.java:204)
at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:769)
at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:699)
at org.apache.hadoop.fs.FileContext.rename(FileContext.java:1032)
at org.apache.spark.sql.delta.storage.HDFSLogStore.writeInternal(HDFSLogStore.scala:102)
at org.apache.spark.sql.delta.storage.HDFSLogStore.write(HDFSLogStore.scala:78)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit.apply$mcJ$sp(OptimisticTransaction.scala:388)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit.apply(OptimisticTransaction.scala:383)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit.apply(OptimisticTransaction.scala:383)
at org.apache.spark.sql.delta.DeltaLog.lockInterruptibly(DeltaLog.scala:207)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$class.org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit(OptimisticTransaction.scala:382)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$checkAndRetry.apply$mcJ$sp(OptimisticTransaction.scala:550)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$checkAndRetry.apply(OptimisticTransaction.scala:449)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$checkAndRetry.apply(OptimisticTransaction.scala:449)
at com.databricks.spark.util.DatabricksLogging$class.recordOperation(DatabricksLogging.scala:77)
at org.apache.spark.sql.delta.OptimisticTransaction.recordOperation(OptimisticTransaction.scala:78)
at org.apache.spark.sql.delta.metering.DeltaLogging$class.recordDeltaOperation(DeltaLogging.scala:103)
at org.apache.spark.sql.delta.OptimisticTransaction.recordDeltaOperation(OptimisticTransaction.scala:78)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$class.checkAndRetry(OptimisticTransaction.scala:449)
at org.apache.spark.sql.delta.OptimisticTransaction.checkAndRetry(OptimisticTransaction.scala:78)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit.apply$mcJ$sp(OptimisticTransaction.scala:433)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit.apply(OptimisticTransaction.scala:383)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit.apply(OptimisticTransaction.scala:383)
at org.apache.spark.sql.delta.DeltaLog.lockInterruptibly(DeltaLog.scala:207)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$class.org$apache$spark$sql$delta$OptimisticTransactionImpl$$doCommit(OptimisticTransaction.scala:382)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$commit.apply$mcJ$sp(OptimisticTransaction.scala:293)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$commit.apply(OptimisticTransaction.scala:252)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$$anonfun$commit.apply(OptimisticTransaction.scala:252)
at com.databricks.spark.util.DatabricksLogging$class.recordOperation(DatabricksLogging.scala:77)
at org.apache.spark.sql.delta.OptimisticTransaction.recordOperation(OptimisticTransaction.scala:78)
at org.apache.spark.sql.delta.metering.DeltaLogging$class.recordDeltaOperation(DeltaLogging.scala:103)
at org.apache.spark.sql.delta.OptimisticTransaction.recordDeltaOperation(OptimisticTransaction.scala:78)
at org.apache.spark.sql.delta.OptimisticTransactionImpl$class.commit(OptimisticTransaction.scala:252)
at org.apache.spark.sql.delta.OptimisticTransaction.commit(OptimisticTransaction.scala:78)
at org.apache.spark.sql.delta.commands.WriteIntoDelta$$anonfun$run.apply(WriteIntoDelta.scala:67)
at org.apache.spark.sql.delta.commands.WriteIntoDelta$$anonfun$run.apply(WriteIntoDelta.scala:64)
at org.apache.spark.sql.delta.DeltaLog.withNewTransaction(DeltaLog.scala:396)
at org.apache.spark.sql.delta.commands.WriteIntoDelta.run(WriteIntoDelta.scala:64)
at org.apache.spark.sql.delta.sources.DeltaDataSource.createRelation(DeltaDataSource.scala:133)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
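For reference, the writes are plain concurrent Delta appends along the following lines (a minimal sketch, not the exact job; the <storage_account> and table path are placeholders):

import org.apache.spark.sql.SparkSession

// Minimal sketch of two concurrent appends to one Delta table on wasbs://.
// <container_name> and <storage_account> are placeholders.
val spark = SparkSession.builder().appName("concurrent-delta-appends").getOrCreate()
val tablePath =
  "wasbs://<container_name>@<storage_account>.blob.core.windows.net/delta_table"

// Each append races to commit the next _delta_log/<version>.json entry;
// the losing writer retries and renames a .tmp file into place.
Seq(1, 2).par.foreach { _ =>
  spark.range(100).toDF("id")
    .write
    .format("delta")
    .mode("append")
    .save(tablePath)
}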
The stack trace shows that you are using an incorrect LogStore implementation: HDFSLogStore. HDFSLogStore is built for the Hadoop Distributed File System (HDFS) and relies on atomic renames, which Azure Blob Storage does not guarantee; that is why the rename of the _delta_log commit file fails.
To use Delta Lake on Azure Blob Storage, the following configuration should be set:
spark.delta.logStore.class=org.apache.spark.sql.delta.storage.AzureLogStore
For complete instructions, see https://docs.delta.io/latest/delta-storage.html#azure-blob-storage.
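A minimal sketch of wiring this up when building the SparkSession follows. The spark.hadoop.fs.azure.account.key property and all <...> names are placeholder assumptions; the spark.delta.logStore.class setting can equally be passed as a --conf flag to spark-submit.

import org.apache.spark.sql.SparkSession

// Sketch: point Delta at AzureLogStore so _delta_log commits use
// Azure-safe writes instead of HDFS-style renames.
// <storage_account>, <account_key>, and <container_name> are placeholders.
val spark = SparkSession.builder()
  .appName("delta-on-azure-blob")
  .config("spark.delta.logStore.class",
    "org.apache.spark.sql.delta.storage.AzureLogStore")
  .config("spark.hadoop.fs.azure.account.key.<storage_account>.blob.core.windows.net",
    "<account_key>")
  .getOrCreate()

// Concurrent appends to the same table can now commit safely.
spark.range(100).toDF("id")
  .write
  .format("delta")
  .mode("append")
  .save("wasbs://<container_name>@<storage_account>.blob.core.windows.net/delta_table")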