Spark Session Catalog Failure

I'm batch reading from a Cassandra database and also streaming data from Azure Event Hubs, using the Scala Spark API.

session.read
  .format("org.apache.spark.sql.cassandra")
  .option("keyspace", keyspace)
  .option("table", table)
  .option("pushdown", pushdown)
  .load()

&

session.readStream
  .format("eventhubs")
  .options(eventHubsConf.toMap)
  .load()

Everything was running fine, but now, out of nowhere, I get:

User class threw exception: java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(Lscala/Function0;Lscala/Function0;Lorg/apache/spark/sql/catalyst/analysis/FunctionRegistry;Lorg/apache/spark/sql/internal/SQLConf;Lorg/apache/hadoop/conf/Configuration;Lorg/apache/spark/sql/catalyst/parser/ParserInterface;Lorg/apache/spark/sql/catalyst/catalog/FunctionResourceLoader;)V
at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalog$lzycompute(BaseSessionStateBuilder.scala:132)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalog(BaseSessionStateBuilder.scala:131)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anon.<init>(BaseSessionStateBuilder.scala:157)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.analyzer(BaseSessionStateBuilder.scala:157)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:428)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:233)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)

I don't know what exactly changed, but here are my dependencies:

ThisBuild / scalaVersion := "2.11.11"
val sparkVersion = "2.4.0"

libraryDependencies ++= Seq(
  "org.apache.logging.log4j" % "log4j-core" % "2.11.1",
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql" % sparkVersion  % "provided",
  "org.apache.spark" %% "spark-hive" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-catalyst" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided",
  "com.microsoft.azure" % "azure-eventhubs-spark_2.11" % "2.3.10",
  "com.microsoft.azure" % "azure-eventhubs" % "2.3.0",
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.4.1",
  "org.scala-lang.modules" %% "scala-java8-compat" % "0.9.0",
  "com.twitter" % "jsr166e" % "1.1.0",
  "com.holdenkarau" %% "spark-testing-base" % "2.4.0_0.12.0" % Test,
  "MrPowers" % "spark-fast-tests" % "0.19.2-s_2.11" % Test
)

Does anyone have a clue?

   java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(
   Lscala/Function0;Lscala/Function0;
   Lorg/apache/spark/sql/catalyst/analysis/FunctionRegistry;
   Lorg/apache/spark/sql/internal/SQLConf;
   Lorg/apache/hadoop/conf/Configuration;
   Lorg/apache/spark/sql/catalyst/parser/ParserInterface;
   Lorg/apache/spark/sql/catalyst/catalog/FunctionResourceLoader;)V

suggests to me that one of the libraries was compiled against a different Spark version than the one currently on the runtime classpath, since the above method signature matches the Spark 2.4.0 signature, see

https://github.com/apache/spark/blob/v2.4.1/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala#L56-L63

but not the Spark 2.3.0 signature.
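
One way to confirm this is to ask the JVM which jar the class was actually loaded from. A minimal sketch, assuming it is pasted into spark-shell on the cluster (or run from the driver):

// Print the location of the jar that provides SessionCatalog; the path
// should name a 2.4.x artifact. A 2.3.x jar here confirms the mismatch.
// Note: getCodeSource can be null for bootstrap-loaded classes.
val loc = classOf[org.apache.spark.sql.catalyst.catalog.SessionCatalog]
  .getProtectionDomain.getCodeSource.getLocation
println(s"SessionCatalog loaded from: $loc")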

My guess is that there is a Spark 2.3.0 runtime somewhere. Perhaps you are running the application via spark-submit from a Spark 2.3.0 installation?
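
If that is the case, it should show up by logging the runtime version at startup. A minimal sketch, assuming a plain SparkSession-based driver:

import org.apache.spark.sql.SparkSession

// Log the Spark version the driver actually runs on. If this prints
// 2.3.0 while the build targets 2.4.0, the spark-submit installation
// launching the job is older than the one the jar was compiled against.
val session = SparkSession.builder().getOrCreate()
println(s"Runtime Spark version: ${session.version}")

Running spark-submit --version on the machine that launches the job should report the same number.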