hadoop-aws and aws-java-sdk version compatibility for Spark 3.1.2

I am updating a Spark project that uses hadoop-aws and aws-java-sdk-s3 to Spark 3.1.2 with Scala 2.12.15, so that it can run on EMR 6.5.0.

I checked the EMR release notes, which list the component versions shipped with that release.

I am currently running Spark locally to verify compatibility with the versions above, and I get the following error:

 java.lang.NoSuchFieldError: SERVICE_ID
    at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4925)
    at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4911)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1441)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1381)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists(S3AFileSystem.java:381)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry(Invoker.java:265)
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:380)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:314)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access0(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)

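For context, nothing elaborate is needed to trigger this: any filesystem operation on an s3a:// path goes through S3AFileSystem.initialize(), which is where the error above surfaces. A minimal local sketch (the bucket name and object names are placeholders, not from my actual project):

import org.apache.spark.sql.SparkSession

// Minimal repro sketch: reading any s3a:// URI forces S3AFileSystem.initialize(),
// the frame the stack trace above fails in.
object S3ACompatCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")            // running locally, as described above
      .appName("s3a-compat-check")
      .getOrCreate()

    // "my-bucket" is a placeholder; credentials come from the default
    // AWS provider chain (environment variables, profiles, and so on).
    spark.read.text("s3a://my-bucket/some-prefix/").show(5)

    spark.stop()
  }
}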
I also tried checking which aws-java-sdk version hadoop-aws is built on: hadoop-aws 3.2.1 depends on aws-java-sdk 1.11.375, as can be found here.

But these versions lead to a different error:

 java.lang.NoSuchMethodError: 'org.apache.http.client.methods.HttpRequestBase com.amazonaws.http.HttpResponse.getHttpRequest()'
    at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:57)
    at com.amazonaws.services.s3.internal.S3ObjectResponseHandler.handle(S3ObjectResponseHandler.java:29)
    at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1555)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1272)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access0(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4368)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4315)
    at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1416)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:196)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
    at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:195)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek(S3AInputStream.java:346)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry(Invoker.java:195)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry(Invoker.java:265)
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:193)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:215)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:339)
    at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:451)
    at java.base/java.io.DataInputStream.read(DataInputStream.java:149)

build.sbt:

scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.1.2",
  "org.apache.spark" %% "spark-sql"  % "3.1.2",
  "com.fasterxml.jackson.core"    % "jackson-databind"     % "2.12.2",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.12.2",
  "org.apache.hadoop"             % "hadoop-client"        % "3.2.1",
  "org.apache.hadoop"             % "hadoop-aws"           % "3.2.1",
  "com.amazonaws"                 % "aws-java-sdk-s3"      % "1.11.375"
)
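
One way to double-check how these declarations actually resolve is sbt's built-in dependency-tree support; this assumes sbt 1.4+, where the bundled plugin is enabled with a single line in project/plugins.sbt:

// project/plugins.sbt (sbt 1.4+): enable the bundled dependency-tree tasks
addDependencyTreePlugin

// then, from the sbt shell:
//   dependencyTree                                  -- prints the full resolved tree
//   whatDependsOn com.amazonaws aws-java-sdk-core   -- shows who pulls in the core SDK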

What are the correct versions of these libraries?

The EMR docs say "use our own s3: connector"... do that if you are running on EMR.

You should be using s3a on other installations, including local ones.

There:

  • mvnrepository is a good way to get a view of the dependencies
    * here is its summary for hadoop-aws, although the 3.2.1 entry omits all of the dependencies; it is 1.11.375
  • the stack traces you are seeing come from trying to get the AWS S3 SDK, core SDK, Jackson and httpclient in sync
  • it is easiest to give up and just go with the full aws-java-sdk-bundle, which has a consistent set of AWS artifacts and private versions of its dependencies. It is huge, but it removes all problems related to transitive dependencies (see the sketch after this list)

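For completeness, this is roughly what the bundle-based build would look like; 1.11.375 is the bundle version that hadoop-aws 3.2.1 declares, so re-check it for any other hadoop-aws release:

libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"          % "3.1.2",
  "org.apache.spark"  %% "spark-sql"           % "3.1.2",
  "org.apache.hadoop"  % "hadoop-client"       % "3.2.1",
  "org.apache.hadoop"  % "hadoop-aws"          % "3.2.1",
  // the shaded bundle replaces aws-java-sdk-s3 and aws-java-sdk-core
  "com.amazonaws"      % "aws-java-sdk-bundle" % "1.11.375"
)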
It turned out that explicitly adding a dependency on aws-java-sdk-core fixed my problem, as mentioned here. That way I could avoid the heavy AWS SDK bundle.

build.sbt:

scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  "org.apache.spark"             %% "spark-core"           % "3.1.2",
  "org.apache.spark"             %% "spark-sql"            % "3.1.2",
  "com.fasterxml.jackson.core"    % "jackson-databind"     % "2.12.2",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.12.2",
  "org.apache.hadoop"             % "hadoop-client"        % "3.2.1",
  "org.apache.hadoop"             % "hadoop-aws"           % "3.2.1",
  "com.amazonaws"                 % "aws-java-sdk-s3"      % "1.11.375",
  "com.amazonaws"                 % "aws-java-sdk-core"    % "1.11.375"
)
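
As a follow-up note: to keep the two SDK artifacts from drifting apart again when some transitive dependency requests a different release, the versions could also be pinned, for example with sbt's dependencyOverrides (a sketch, not something this build strictly required):

// Pin every com.amazonaws artifact in use to the same SDK release, even if
// a transitive dependency asks for a different one.
dependencyOverrides ++= Seq(
  "com.amazonaws" % "aws-java-sdk-s3"   % "1.11.375",
  "com.amazonaws" % "aws-java-sdk-core" % "1.11.375"
)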