Amazon Deequ (Spark + Scala ) - java.lang.NoSuchMethodError: 'scala.Option org.apache.spark.sql.catalyst.expressions.aggregate.AggregateFunction.toAgg

Spark version - 3.0.1; Amazon Deequ version - deequ-2.0.0-spark-3.1.jar

I'm running the following code in my local spark-shell:

import com.amazon.deequ.analyzers.runners.{AnalysisRunner, AnalyzerContext}   
import com.amazon.deequ.analyzers.runners.AnalyzerContext.successMetricsAsDataFrame  
import com.amazon.deequ.analyzers.{Compliance, Correlation, Size, Completeness, Mean, 
ApproxCountDistinct, Maximum, Minimum, Entropy}  

val analysisResult: AnalyzerContext = {
  AnalysisRunner
    .onData(datasourcedf)
    .addAnalyzer(Size())
    .addAnalyzer(Completeness("customerNumber"))
    .addAnalyzer(ApproxCountDistinct("customerNumber"))
    .addAnalyzer(Minimum("creditLimit"))
    .addAnalyzer(Mean("creditLimit"))
    .addAnalyzer(Maximum("creditLimit"))
    .addAnalyzer(Entropy("creditLimit"))
    .run()
}

Error:

    java.lang.NoSuchMethodError: 'scala.Option 
org.apache.spark.sql.catalyst.expressions.aggregate.AggregateFunction.toAggregateExpression$default()'
at org.apache.spark.sql.DeequFunctions$.withAggregateFunction(DeequFunctions.scala:31)
at org.apache.spark.sql.DeequFunctions$.stateful_approx_count_distinct(DeequFunctions.scala:60)
at com.amazon.deequ.analyzers.ApproxCountDistinct.aggregationFunctions(ApproxCountDistinct.scala:52)
at com.amazon.deequ.analyzers.runners.AnalysisRunner$.$anonfun$runScanningAnalyzers(AnalysisRunner.scala:319)
at scala.collection.TraversableLike.$anonfun$flatMap(TraversableLike.scala:245)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.immutable.List.flatMap(List.scala:355)
at com.amazon.deequ.analyzers.runners.AnalysisRunner$.liftedTree1(AnalysisRunner.scala:319)
at com.amazon.deequ.analyzers.runners.AnalysisRunner$.runScanningAnalyzers(AnalysisRunner.scala:318)
at com.amazon.deequ.analyzers.runners.AnalysisRunner$.doAnalysisRun(AnalysisRunner.scala:167)
at com.amazon.deequ.analyzers.runners.AnalysisRunBuilder.run(AnalysisRunBuilder.scala:110)
... 63 elided

Can anyone tell me how to resolve this issue?

You cannot use Deequ version 2.0.0 with Spark 3.0, because it is binary-incompatible due to changes in Spark internals. For Spark 3.0, you need to use version 1.2.2-spark-3.0.
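One way to pick up the compatible build is to let spark-shell resolve it from Maven Central instead of pointing `--jars` at the wrong jar. This is a sketch assuming the coordinates `com.amazon.deequ:deequ:1.2.2-spark-3.0` (the Spark 3.0 line of Deequ 1.2.2):

```shell
# Launch spark-shell with the Spark 3.0-compatible Deequ build.
# Coordinates assumed: com.amazon.deequ:deequ:1.2.2-spark-3.0
spark-shell --packages com.amazon.deequ:deequ:1.2.2-spark-3.0
```

If you upgrade Spark itself to 3.1.x instead, the deequ-2.0.0-spark-3.1 jar you already have should match; the key point is that the Deequ artifact's `-spark-X.Y` suffix must agree with your Spark version.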