Spark files not found in cluster deploy mode

I am trying to run a Spark job in cluster deploy mode by issuing the following spark-submit from the master node of an EMR cluster:

spark-submit --master yarn \
--deploy-mode cluster \
--files truststore.jks,kafka.properties,program.properties \ 
--class com.someOrg.somePackage.someClass s3://someBucket/someJar.jar kafka.properties program.properties

I get the following error, stating that the file cannot be found in the Spark executor working directory:

//This is me printing the Spark executor working directory through SparkFiles.getRootDirectory()
20/07/03 17:53:40 INFO Program$: This is the path: /mnt1/yarn/usercache/hadoop/appcache/application_1593796195404_0011/spark-46b7fe4d-ba16-452a-a5a7-fbbab740bf1e/userFiles-9c6d4cae-2261-43e8-8046-e49683f9fd3e
        
//This is me trying to list the content for that working directory, which turns out empty.
20/07/03 17:53:40 INFO Program$: This is the content for the path:
                
//This is me getting the error:
    20/07/03 17:53:40 ERROR ApplicationMaster: User class threw exception: java.nio.file.NoSuchFileException: /mnt1/yarn/usercache/hadoop/appcache/application_1593796195404_0011/spark-46b7fe4d-ba16-452a-a5a7-fbbab740bf1e/userFiles-9c6d4cae-2261-43e8-8046-e49683f9fd3e/program.properties
                java.nio.file.NoSuchFileException: /mnt1/yarn/usercache/hadoop/appcache/application_1593796195404_0011/spark-46b7fe4d-ba16-452a-a5a7-fbbab740bf1e/userFiles-9c6d4cae-2261-43e8-8046-e49683f9fd3e/program.properties
                    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
                    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
                    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
                    at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
                    at java.nio.file.Files.newByteChannel(Files.java:361)
                    at java.nio.file.Files.newByteChannel(Files.java:407)
                    at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
                    at java.nio.file.Files.newInputStream(Files.java:152)
                    at com.someOrg.somePackage.someHelpers$.loadPropertiesFromFile(Helpers.scala:142)
                    at com.someOrg.somePackage.someClass$.main(someClass.scala:33)
                    at com.someOrg.somePackage.someClass.main(someClass.scala)
                    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                    at java.lang.reflect.Method.invoke(Method.java:498)
                    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon.run(ApplicationMaster.scala:685)

This is the function I use to try to read the properties files passed as arguments:

import java.util.Properties
import java.nio.file.{Files, Paths, StandardOpenOption}

def loadPropertiesFromFile(path: String): Properties = {
  val inputStream = Files.newInputStream(Paths.get(path), StandardOpenOption.READ)
  val properties  = new Properties()
  properties.load(inputStream)
  properties
}

Invoked as follows:

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._
val kafkaProperties = loadPropertiesFromFile(SparkFiles.get(args(1)))
val programProperties = loadPropertiesFromFile(SparkFiles.get(args(2)))
//Also tried loadPropertiesFromFile(args(1)) and loadPropertiesFromFile(args(2)) directly
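One debugging aid worth trying here (a sketch, not from the original post; resolveFile is a hypothetical helper): on YARN, --files entries are typically localized into the application container's working directory, so checking the bare file name first and only then falling back to SparkFiles.get shows which layout the driver is actually running in.

import java.nio.file.{Files, Path, Paths}
import org.apache.spark.SparkFiles

//Hypothetical helper: try the bare file name first (YARN usually links
//--files into the container's working directory), then fall back to
//SparkFiles.get, which resolves against the userFiles-* directory instead
def resolveFile(name: String): Path = {
  val local = Paths.get(name)
  if (Files.exists(local)) local else Paths.get(SparkFiles.get(name))
}

val kafkaProperties = loadPropertiesFromFile(resolveFile(args(1)).toString)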

The program works as expected when run in client deploy mode:

spark-submit --master yarn \
--deploy-mode client \
--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 \
--files truststore.jks \
program.jar com.someOrg.somePackage.someClass kafka.properties program.properties

This happens on Spark 2.4.5 / EMR 5.30.1.

Furthermore, when I try to configure this job as an EMR step, it does not even work in client mode. Any clues on how resource files are managed/persisted/made available when passed through the --files option in EMR?

Option 1: Put these files in S3 and pass the S3 paths.
Option 2: Copy these files to a specific location on every node (e.g. with a bootstrap action) and pass the files' absolute paths (see the sketch below).
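A minimal sketch of Option 2, assuming a bootstrap action has already copied the files to the same absolute path on every node (/home/hadoop/resources is purely illustrative):

//With a node-local copy in place, bypass SparkFiles and --files entirely
//and read the absolute path directly (the path is an assumption)
val programProperties = loadPropertiesFromFile("/home/hadoop/resources/program.properties")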

Solved as suggested in the comments above:

spark-submit --master yarn \
--deploy-mode cluster \
--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.5 \
--files s3://someBucket/resources/truststore.jks,s3://someBucket/resources/kafka.properties,s3://someBucket/resources/program.properties \
--class com.someOrg.someClass.someMain \
s3://someBucket/resources/program.jar kafka.properties program.properties

I was previously under the assumption that, in cluster deploy mode, the files passed through --files would be shipped along with the driver (which is deployed to a worker node) and therefore be available in its working directory, as long as they were accessible from the machine issuing spark-submit.

Bottom line: regardless of where you issue spark-submit from, and of whether the files are available on that machine, in cluster mode you must ensure that the files can be reached from every worker node.
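One way to verify this (a sketch built only on the standard SparkFiles API): run a trivial job that lists each executor's SparkFiles root directory and check that the expected files actually landed there.

import java.io.File
import org.apache.spark.SparkFiles

//List what actually reached each executor's SparkFiles root directory
spark.sparkContext
  .parallelize(1 to 8)
  .map { _ =>
    val root = new File(SparkFiles.getRootDirectory())
    Option(root.list()).map(_.sorted.mkString(", ")).getOrElse("<empty>")
  }
  .distinct()
  .collect()
  .foreach(println)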

It is now working by pointing the file locations to S3.

Thank you all!