spark-submit fails when case class fields are reserved Java keywords with backticks
I have put backticks around the reserved-keyword fields. An example case class looks like this:
case class IPC(
  `type`: String,
  main: Boolean,
  normalized: String,
  section: String,
  `class`: String,
  subClass: String,
  group: String,
  subGroup: String
)
I have declared the SparkSession as follows:
def run(params: SparkApp.Params): Unit = {
  val sparkSession = SparkSession.builder.master("local[*]").appName("SparkUsptoParser").getOrCreate()
  // val conf = new SparkConf().setAppName("SparkUsptoParser").set("spark.driver.allowMultipleContexts", "true")
  val sc = sparkSession.sparkContext
  sc.setLogLevel("INFO")
  sc.hadoopConfiguration.set("fs.s3a.connection.timeout", "500000")

  val (patentParsedRDD, zipPathRDD) = runLocal(sc, params)

  logger.info(f"Starting to parse files, appending parquet ${params.outputPath}")
  import sparkSession.implicits._
  val patentParseDF = patentParsedRDD.toDF().write.mode(SaveMode.Append).parquet(params.outputPath)
  logger.info(f"Done parsing and appending parquet")

  // save list of processed archives
  val logPath = params.outputPath + "/log_%s" format java.time.LocalDate.now.toString
  zipPathRDD.coalesce(1).saveAsTextFile(logPath)
  logger.info(f"Log file saved to $logPath")
}
I am trying to run the jar built with sbt, but I get the error "reserved keyword and cannot be used as field name".
Command used:
./bin/spark-submit /Users/Projects/uspto-parser/target/scala-2.11/uspto-parser-assembly-0.1.jar
Error:
Exception in thread "main" java.lang.UnsupportedOperationException: `class` is a reserved keyword and cannot be used as field name
- array element class: "usptoparser.IPC"
- field (class: "scala.collection.immutable.List", name: "ipcs")
- root class: "usptoparser.PatentDocument"
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor$$anonfun.apply(ScalaReflection.scala:627)
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor$$anonfun.apply(ScalaReflection.scala:625)
at scala.collection.TraversableLike$$anonfun$flatMap.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
Versions:
sparkVersion := "2.3.0"
sbt.version = 0.13.8
scalaVersion := "2.11.2"
You can work around it by using field names that are not reserved Java keywords and then renaming the columns back with 'as':
scala> case class IPC(name: String, `class`: String)
defined class IPC
scala> val x = Seq(IPC("a", "b"), IPC("d", "e")).toDF
java.lang.UnsupportedOperationException: `class` is a reserved keyword and cannot be used as field name
- root class: "IPC"
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor$$anonfun.apply(ScalaReflection.scala:627)
...
scala> case class IPC(name: String, clazz: String)
defined class IPC
scala> val x = Seq(IPC("a", "b"), IPC("d", "e")).toDF
x: org.apache.spark.sql.DataFrame = [name: string, clazz: string]
scala> x.select($"clazz".as("class")).show(false)
+-----+
|class|
+-----+
|b |
|e |
+-----+
scala>
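Applied to the IPC case class from the question, a minimal self-contained sketch of this workaround might look like the following. The substitute field names ipcType and ipcClass are arbitrary non-keyword choices, and the sample values are made up for illustration; the original column names are restored only at the DataFrame level, where reserved Java keywords are allowed as column names.

import org.apache.spark.sql.SparkSession

object IpcRenameExample {

  // Reserved keywords `type` and `class` replaced with non-keyword field names.
  case class IPC(
    ipcType: String,
    main: Boolean,
    normalized: String,
    section: String,
    ipcClass: String,
    subClass: String,
    group: String,
    subGroup: String
  )

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[*]").appName("IpcRenameExample").getOrCreate()
    import spark.implicits._

    // Sample row with made-up values, just to exercise the encoder.
    val df = Seq(
      IPC("utility", main = true, "A01B 33/02", "A", "01", "B", "33", "02")
    ).toDF()

    // Rename back to the original column names at the DataFrame level;
    // the encoder restriction only applies to case class field names.
    val renamed = df
      .withColumnRenamed("ipcType", "type")
      .withColumnRenamed("ipcClass", "class")

    renamed.printSchema()
    spark.stop()
  }
}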