Reading local parquet files in Spark 2.0

In Spark 1.6.2 I can read a local parquet file with a very simple call:

SQLContext sqlContext = new SQLContext(new SparkContext("local[*]", "Java Spark SQL Example"));
DataFrame parquet = sqlContext.read().parquet("file:///C:/files/myfile.csv.parquet");
parquet.show(20);

I'm trying to upgrade to Spark 2.0.0 and achieve the same thing by running:

SparkSession spark = SparkSession.builder().appName("Java Spark SQL Example").master("local[*]").getOrCreate();
Dataset<Row> parquet = spark.read().parquet("file:///C:/files/myfile.csv.parquet");
parquet.show(20);

This is running on Windows, from IntelliJ (a Java project), and I'm not currently using a Hadoop cluster (that will come later, but for now I'm just trying to get the processing logic right and get familiar with the APIs).

Unfortunately, when run with Spark 2.0 the code throws an exception:

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.<init>(Path.java:172)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState$$anon.<init>(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:427)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:411)
at lili.spark.ParquetTest.main(ParquetTest.java:15)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:203)
... 21 more

I have no idea why it's trying to touch anything in my project directory. Is there some configuration I'm missing that had a sensible default in Spark 1.6.2 but no longer does in 2.0? In other words, what's the simplest way to read a local parquet file in Spark 2.0 on Windows?

It looks like you've run into SPARK-15893. The Spark developers changed how files are read between 1.6.2 and 2.0.0. Per the comments on that JIRA, you should go to the conf\spark-defaults.conf file and add:

"spark.sql.warehouse.dir=file:///C:/Experiment/spark-2.0.0-bin-without-hadoop/spark-warehouse"

Then you should be able to load the parquet file like this:

Dataset<Row> parquet = spark.read().parquet("C:/files/myfile.csv.parquet");
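
If you're launching straight from IntelliJ and don't have a Spark distribution's conf\spark-defaults.conf to edit, the same property can be set on the session builder instead. A minimal sketch, assuming C:/tmp/spark-warehouse is any directory you can write to (the exact path doesn't matter, it just has to be a well-formed file: URI):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ParquetTest {
    public static void main(String[] args) {
        // Give the catalog a well-formed file: URI for its warehouse directory,
        // which avoids the "Relative path in absolute URI" error on Windows.
        SparkSession spark = SparkSession.builder()
                .appName("Java Spark SQL Example")
                .master("local[*]")
                .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")
                .getOrCreate();

        Dataset<Row> parquet = spark.read().parquet("file:///C:/files/myfile.csv.parquet");
        parquet.show(20);
    }
}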