Spark 2.0 S3 metadata load hangs on multiple dataframe read

We are currently evaluating an upgrade from Spark 1.6 to Spark 2.0, but a very strange bug is blocking the migration.

One of our requirements is to read multiple datasets from S3 and union them together. Loading 50 datasets works fine; on the 51st load, however, everything hangs while looking up metadata. This is not intermittent, it happens every time.

The data is stored as Avro containers, read with spark-avro 3.0.0; a minimal sketch of the read path is shown below.
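For context, this is roughly what the loading loop looks like (bucket and dataset names here are placeholders, not our actual paths):

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().appName("s3-avro-union").getOrCreate()

// Placeholder S3 prefixes; in reality we load 51 separate datasets.
val paths: Seq[String] = (1 to 51).map(i => s"s3://our-bucket/dataset-$i/")

// Each load resolves the source against S3 (the getObjectMetadata calls
// in the thread dump below); the hang occurs on the 51st load.
val frames: Seq[DataFrame] =
  paths.map(p => spark.read.format("com.databricks.spark.avro").load(p))

val combined: DataFrame = frames.reduce(_ union _)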

Does anyone have an answer to this?


<<main thread dump>>
java.lang.Thread.sleep(Native Method)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doPauseBeforeRetry(AmazonHttpClient.java:1475)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.pauseBeforeRetry(AmazonHttpClient.java:1439)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:794)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3826)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1015)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:991)
com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:212)
sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
com.sun.proxy.$Proxy36.retrieveMetadata(Unknown Source)
com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:780)
org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1428)
com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:313)
org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:289)
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:324)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)

It appears that spark-avro exhausts the S3 connection pool by not releasing connections.

https://github.com/databricks/spark-avro/issues/156
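If the connection-leak theory is right, the pool size would also explain the magic number: on the EMR release we run, EMRFS appears to cap the S3 connection pool at 50 by default (fs.s3.maxConnections), which matches a hang on exactly the 51st load. As an untested stopgap, not a fix for the leak itself, the pool could be raised above the number of datasets before any reads happen:

// Untested workaround sketch: raise the EMRFS S3 connection pool above
// the number of datasets we union. This only papers over the leak.
spark.sparkContext.hadoopConfiguration.setInt("fs.s3.maxConnections", 200)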