Apache Spark MLlib with DataFrame API gives java.net.URISyntaxException when createDataFrame() or read().csv(...)
In a standalone application (running on Java 8, Windows 10, with spark-xxx_2.11:2.0.0 as jar dependencies), the following code gives an error:
/* this: */
Dataset<Row> logData = spark_session.createDataFrame(Arrays.asList(
new LabeledPoint(1.0, Vectors.dense(4.9,3,1.4,0.2)),
new LabeledPoint(1.0, Vectors.dense(4.7,3.2,1.3,0.2))
), LabeledPoint.class);
/* or this: */
/* logFile: "C:\files\project\file.csv", "C:\files\project\file.csv",
"C:/files/project/file.csv", "file:/C:/files/project/file.csv",
"file:///C:/files/project/file.csv", "/file.csv" */
Dataset<Row> logData = spark_session.read().csv(logFile);
Exception:
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/files/project/spark-warehouse
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.<init>(Path.java:172)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState$$anon.<init>(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:373)
at <call in my line of code>
How can I load a CSV file into a Dataset&lt;Row&gt; from Java code?
There is a known issue with how filesystem paths are resolved on Windows; see the JIRA ticket https://issues.apache.org/jira/browse/SPARK-15899. As a workaround, you can set "spark.sql.warehouse.dir" when building the SparkSession, as shown below.
SparkSession spark = SparkSession
    .builder()
    .appName("JavaALSExample")
    // use a well-formed file URI (scheme + "///") so Hadoop's Path parser accepts it on Windows
    .config("spark.sql.warehouse.dir", "file:///C:/temp")
    .getOrCreate();
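Putting the workaround together, here is a minimal sketch of loading the CSV into a Dataset&lt;Row&gt; once the warehouse directory is configured. The class name, the `local[*]` master, and the path `C:/files/project/file.csv` are illustrative assumptions, not values from the original question:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CsvLoadExample {
    public static void main(String[] args) {
        // Set spark.sql.warehouse.dir to a well-formed file URI before the
        // session is created, so SessionCatalog.makeQualifiedPath does not
        // hit the URISyntaxException described in SPARK-15899 on Windows.
        SparkSession spark = SparkSession
            .builder()
            .master("local[*]")            // run locally inside the standalone app
            .appName("CsvLoadExample")
            .config("spark.sql.warehouse.dir", "file:///C:/temp")
            .getOrCreate();

        // A file: URI with forward slashes is the safest path form on Windows.
        Dataset<Row> logData = spark.read()
            .option("header", "false")     // adjust if the file has a header row
            .option("inferSchema", "true")
            .csv("file:///C:/files/project/file.csv");

        logData.show();
        spark.stop();
    }
}
```

With the warehouse directory fixed, both `createDataFrame()` and `read().csv(...)` should work; the exception came from the session catalog's path handling, not from the CSV reader itself.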