Spark 2.0 with spark.read.text Expected scheme-specific part at index 3: s3: error

I've run into a strange problem with Spark 2.0 when loading a text file using a SparkSession. Currently my Spark config looks like this:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Register the Hadoop Writable classes with Kryo and build the session
val sparkConf = new SparkConf().setAppName("name-here")
sparkConf.registerKryoClasses(Array(Class.forName("org.apache.hadoop.io.LongWritable"), Class.forName("org.apache.hadoop.io.Text")))
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val spark = SparkSession.builder()
    .config(sparkConf)
    .getOrCreate()
// s3a filesystem settings on the underlying Hadoop configuration
spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.enableServerSideEncryption", "true")
spark.sparkContext.hadoopConfiguration.set("mapreduce.fileoutputcommitter.algorithm.version", "2")

Loading an s3a file through an RDD works fine. However, if I try to use something like this:

    // List the distinct input file names matching the glob
    val blah = SparkConfig.spark.read.text("s3a://bucket-name/*/*.txt")
        .select(input_file_name(), col("value"))
        .drop("value")
        .distinct()
    val x = blah.collect()
    println(blah.head().get(0))
    println(x.size)

I get an exception with the message: java.net.URISyntaxException: Expected scheme-specific part at index 3: s3:

Do I need to add some extra s3a configuration for the SQLContext or SparkSession? I haven't found any question or online resource that mentions this. The strange part is that the job appears to run for 10 minutes, then fails with this exception. Again, loading an RDD the normal way, with the same bucket and everything, works without issue.
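For comparison, the RDD-based read that succeeds is just a plain textFile call against the same bucket (a minimal sketch; the bucket name and glob are placeholders matching the example above):

// Sketch: the RDD path that works fine with the same bucket
val rdd = SparkConfig.spark.sparkContext.textFile("s3a://bucket-name/*/*.txt")
println(rdd.count())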

Another strange thing is that it complains about s3 rather than s3a. I've triple-checked my prefix, and it always says s3a.

Edit: I checked both s3a and s3; both throw the same exception.

17/04/06 21:29:14 ERROR ApplicationMaster: User class threw exception: 
java.lang.IllegalArgumentException: java.net.URISyntaxException: 
Expected scheme-specific part at index 3: s3:
java.lang.IllegalArgumentException: java.net.URISyntaxException: 
Expected scheme-specific part at index 3: s3:
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at org.apache.hadoop.fs.Path.<init>(Path.java:93)
at org.apache.hadoop.fs.Globber.glob(Globber.java:240)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1732)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1713)
at org.apache.spark.deploy.SparkHadoopUtil.globPath(SparkHadoopUtil.scala:237)
at org.apache.spark.deploy.SparkHadoopUtil.globPathIfNecessary(SparkHadoopUtil.scala:243)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun.apply(DataSource.scala:374)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun.apply(DataSource.scala:370)
at scala.collection.TraversableLike$$anonfun$flatMap.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:506)
at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:486)
at com.omitted.omitted.jobs.Omitted$.doThings(Omitted.scala:18)
at com.omitted.omitted.jobs.Omitted$.main(Omitted.scala:93)
at com.omitted.omitted.jobs.Omitted.main(Omitted.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon.run(ApplicationMaster.scala:637)
Caused by: java.net.URISyntaxException: Expected scheme-specific part 
at index 3: s3:
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.failExpecting(URI.java:2854)
at java.net.URI$Parser.parse(URI.java:3057)
at java.net.URI.<init>(URI.java:746)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 26 more
17/04/06 21:29:14 INFO ApplicationMaster: Final app status: FAILED, 
exitCode: 15, (reason: User class threw exception: 
java.lang.IllegalArgumentException: java.net.URISyntaxException: 
Expected scheme-specific part at index 3: s3:)

This should work:

  • Get the right JARs on your CP: Spark built with Hadoop 2.7, the matching hadoop-aws JAR, aws-java-sdk-1.7.4.jar (exactly this version) and joda-time-2.9.3.jar (or later); see the sbt sketch after this list.
  • You do not need to set the fs.s3a.impl value; that binding is already in the Hadoop defaults. If you find yourself setting it, that's a sign of a problem.
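As an illustration, those classpath additions could be declared in sbt roughly like this (a sketch; the coordinates are the standard Maven ones, and the hadoop-aws version is an assumption that must match your Hadoop 2.7.x build):

// build.sbt fragment (sketch): versions must line up with your Spark/Hadoop build
libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-aws"   % "2.7.3",  // match your Hadoop version
  "com.amazonaws"     % "aws-java-sdk" % "1.7.4",  // exactly this version
  "joda-time"         % "joda-time"    % "2.9.3"   // or later
)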

What's the full stack trace?