File read from ADLS Gen2 Error - Configuration property xxx.dfs.core.windows.net not found
I am using ADLS Gen2 from a Databricks notebook and trying to process files with an 'abfss' path.
I can read Parquet files just fine, but when I try to load an XML file I get the error: Configuration property xxx.dfs.core.windows.net not found.
I have not tried mounting the storage; I am trying to understand whether this is a known limitation for XML files, since the same Parquet reads work fine.
Here is my XML library configuration:
com.databricks:spark-xml_2.11:0.9.0
I tried a few things based on other articles, but I still hit the same error.
- Added a new scope to check whether it was a scope issue in the Databricks workspace.
- Tried adding the configuration:
spark.conf.set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx==")
df = spark.read.format("xml") \
    .option("rootTag", "BookArticle") \
    .option("inferSchema", "true") \
    .option("error_bad_lines", True) \
    .option("mode", "DROPMALFORMED") \
    .load(abfsssourcename)  # abfsssourcename is the path of the source file
Exception Details: Py4JJavaError: An error occurred while calling o1113.load.
Configuration property xxxx.dfs.core.windows.net not found.
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:392)
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1008)
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:151)
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:106)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:500)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:469)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile.apply(SparkContext.scala:1281)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile.apply(SparkContext.scala:1269)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:820)
at org.apache.spark.SparkContext.newAPIHadoopFile(SparkContext.scala:1269)
at com.databricks.spark.xml.util.XmlFile$.withCharset(XmlFile.scala:46)
at com.databricks.spark.xml.DefaultSource$$anonfun$createRelation.apply(DefaultSource.scala:71)
at com.databricks.spark.xml.DefaultSource$$anonfun$createRelation.apply(DefaultSource.scala:71)
at com.databricks.spark.xml.XmlRelation$$anonfun.apply(XmlRelation.scala:43)
at com.databricks.spark.xml.XmlRelation$$anonfun.apply(XmlRelation.scala:42)
at scala.Option.getOrElse(Option.scala:121)
at com.databricks.spark.xml.XmlRelation.<init>(XmlRelation.scala:41)
at com.databricks.spark.xml.XmlRelation$.apply(XmlRelation.scala:29)
at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:74)
at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:52)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:311)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:297)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
The solution is summarized below.
The package com.databricks:spark-xml appears to read XML files through the RDD API. When Azure Data Lake Storage Gen2 is accessed via the RDD API, Hadoop configuration options set with spark.conf.set(...) are not visible, so the code should be updated to spark._jsc.hadoopConfiguration().set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx=="). For details, please refer here.
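Put together, a minimal sketch of the working read might look like this (the storage account name, key, container and file path are placeholders; adjust them to your environment):

# Set the account key on the underlying Hadoop configuration so the
# RDD-based spark-xml reader can see it; spark.conf.set(...) only covers
# the DataFrame/Dataset code path.
spark._jsc.hadoopConfiguration().set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx==")

# Placeholder abfss path to the XML file.
abfsssourcename = "abfss://<container>@xxxxx.dfs.core.windows.net/path/to/file.xml"

df = spark.read.format("xml") \
    .option("rootTag", "BookArticle") \
    .option("inferSchema", "true") \
    .load(abfsssourcename)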
In addition, you can also mount Azure Data Lake Storage Gen2 as a file system in Azure Databricks.
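For reference, a mount would look roughly like the sketch below, following the commonly documented OAuth/service-principal pattern (the application ID, secret scope, tenant ID, container and mount point are all placeholders):

# Mount the ADLS Gen2 container once; afterwards any reader, including
# spark-xml, can use the /mnt path without per-session account keys.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="<scope>", key="<key>"),
    "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://<container>@xxxxx.dfs.core.windows.net/",
    mount_point="/mnt/adls",
    extra_configs=configs)

df = spark.read.format("xml").option("rootTag", "BookArticle").load("/mnt/adls/path/to/file.xml")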