Spark job running on YARN cluster fails with java.io.FileNotFoundException: File does not exist, even though the file exists on the master node
I am new to Spark. I tried searching but could not find a suitable solution. I have installed Hadoop 2.7.2 on two boxes (one master node and one worker node) and set up the cluster following this link: http://javadev.org/docs/hadoop/centos/6/installation/multi-node-installation-on-centos-6-non-sucure-mode/
I am running the Hadoop and Spark applications as the root user to test the cluster.
I have installed Spark on the master node, and Spark starts without any errors. However, when I submit a job with spark-submit, I get a File Not Found exception, even though the file exists on the master node at the very location named in the error. I execute the spark-submit command below; please find the log output beneath the command.
/bin/spark-submit --class com.test.Engine --master yarn --deploy-mode cluster /app/spark-test.jar
16/04/21 19:16:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/21 19:16:13 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/04/21 19:16:14 INFO Client: Requesting a new application from cluster with 1 NodeManagers
16/04/21 19:16:14 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/04/21 19:16:14 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
16/04/21 19:16:14 INFO Client: Setting up container launch context for our AM
16/04/21 19:16:14 INFO Client: Setting up the launch environment for our AM container
16/04/21 19:16:14 INFO Client: Preparing resources for our AM container
16/04/21 19:16:14 INFO Client: Source and destination file systems are the same. Not copying file:/mi/spark/lib/spark-assembly-1.6.1-hadoop2.6.0.jar
16/04/21 19:16:14 INFO Client: Source and destination file systems are the same. Not copying file:/app/spark-test.jar
16/04/21 19:16:14 INFO Client: Source and destination file systems are the same. Not copying file:/tmp/spark-120aeddc-0f87-4411-9400-22ba01096249/__spark_conf__5619348744221830008.zip
16/04/21 19:16:14 INFO SecurityManager: Changing view acls to: root
16/04/21 19:16:14 INFO SecurityManager: Changing modify acls to: root
16/04/21 19:16:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/04/21 19:16:15 INFO Client: Submitting application 1 to ResourceManager
16/04/21 19:16:15 INFO YarnClientImpl: Submitted application application_1461246306015_0001
16/04/21 19:16:16 INFO Client: Application report for application_1461246306015_0001 (state: ACCEPTED)
16/04/21 19:16:16 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1461246375622
final status: UNDEFINED
tracking URL: http://sparkcluster01.testing.com:8088/proxy/application_1461246306015_0001/
user: root
16/04/21 19:16:17 INFO Client: Application report for application_1461246306015_0001 (state: ACCEPTED)
16/04/21 19:16:18 INFO Client: Application report for application_1461246306015_0001 (state: ACCEPTED)
16/04/21 19:16:19 INFO Client: Application report for application_1461246306015_0001 (state: ACCEPTED)
16/04/21 19:16:20 INFO Client: Application report for application_1461246306015_0001 (state: ACCEPTED)
16/04/21 19:16:21 INFO Client: Application report for application_1461246306015_0001 (state: FAILED)
16/04/21 19:16:21 INFO Client:
client token: N/A
diagnostics: Application application_1461246306015_0001 failed 2 times due to AM Container for appattempt_1461246306015_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://sparkcluster01.testing.com:8088/cluster/app/application_1461246306015_0001Then, click on links to logs of each attempt.
Diagnostics: java.io.FileNotFoundException: File file:/app/spark-test.jar does not exist
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1461246375622
final status: FAILED
tracking URL: http://sparkcluster01.testing.com:8088/cluster/app/application_1461246306015_0001
user: root
Exception in thread "main" org.apache.spark.SparkException: Application application_1461246306015_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I even tried running Spark against the HDFS file system by placing my application on HDFS and giving the HDFS path in the spark-submit command. Even then it throws a File Not Found exception, this time for a Spark conf file. I execute the spark-submit command below; please find the log output beneath the command.
./bin/spark-submit --class com.test.Engine --master yarn --deploy-mode cluster hdfs://sparkcluster01.testing.com:9000/beacon/job/spark-test.jar
16/04/21 18:11:45 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/04/21 18:11:46 INFO Client: Requesting a new application from cluster with 1 NodeManagers
16/04/21 18:11:46 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/04/21 18:11:46 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
16/04/21 18:11:46 INFO Client: Setting up container launch context for our AM
16/04/21 18:11:46 INFO Client: Setting up the launch environment for our AM container
16/04/21 18:11:46 INFO Client: Preparing resources for our AM container
16/04/21 18:11:46 INFO Client: Source and destination file systems are the same. Not copying file:/mi/spark/lib/spark-assembly-1.6.1-hadoop2.6.0.jar
16/04/21 18:11:47 INFO Client: Uploading resource hdfs://sparkcluster01.testing.com:9000/beacon/job/spark-test.jar -> file:/root/.sparkStaging/application_1461234217994_0017/spark-test.jar
16/04/21 18:11:49 INFO Client: Source and destination file systems are the same. Not copying file:/tmp/spark-f4eef3ac-2add-42f8-a204-be7959c26f21/__spark_conf__6818051470272245610.zip
16/04/21 18:11:50 INFO SecurityManager: Changing view acls to: root
16/04/21 18:11:50 INFO SecurityManager: Changing modify acls to: root
16/04/21 18:11:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/04/21 18:11:50 INFO Client: Submitting application 17 to ResourceManager
16/04/21 18:11:50 INFO YarnClientImpl: Submitted application application_1461234217994_0017
16/04/21 18:11:51 INFO Client: Application report for application_1461234217994_0017 (state: ACCEPTED)
16/04/21 18:11:51 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1461242510849
final status: UNDEFINED
tracking URL: http://sparkcluster01.testing.com:8088/proxy/application_1461234217994_0017/
user: root
16/04/21 18:11:52 INFO Client: Application report for application_1461234217994_0017 (state: ACCEPTED)
16/04/21 18:11:53 INFO Client: Application report for application_1461234217994_0017 (state: ACCEPTED)
16/04/21 18:11:54 INFO Client: Application report for application_1461234217994_0017 (state: FAILED)
16/04/21 18:11:54 INFO Client:
client token: N/A
diagnostics: Application application_1461234217994_0017 failed 2 times due to AM Container for appattempt_1461234217994_0017_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://sparkcluster01.testing.com:8088/cluster/app/application_1461234217994_0017Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/spark-f4eef3ac-2add-42f8-a204-be7959c26f21/__spark_conf__6818051470272245610.zip does not exist
java.io.FileNotFoundException: File file:/tmp/spark-f4eef3ac-2add-42f8-a204-be7959c26f21/__spark_conf__6818051470272245610.zip does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1461242510849
final status: FAILED
tracking URL: http://sparkcluster01.testing.com:8088/cluster/app/application_1461234217994_0017
user: root
Exception in thread "main" org.apache.spark.SparkException: Application application_1461234217994_0017 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/04/21 18:11:55 INFO ShutdownHookManager: Shutdown hook called
16/04/21 18:11:55 INFO ShutdownHookManager: Deleting directory /tmp/spark-f4eef3ac-2add-42f8-a204-be7959c26f21
The Spark configuration was not pointing to the correct Hadoop configuration directory. The Hadoop 2.7.2 configuration lives under hadoop2.7.2/etc/hadoop/, not under /root/hadoop2.7.2/conf. When I set HADOOP_CONF_DIR=/root/hadoop2.7.2/etc/hadoop/ in spark-env.sh, spark-submit started working and the File Not Found exception disappeared. Earlier it was pointing to /root/hadoop2.7.2/conf (which does not exist). If Spark does not point to the correct Hadoop configuration directory, it can cause errors like this one. I think it may be a bug in Spark: it should handle this gracefully instead of throwing an ambiguous error message.
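For reference, a minimal spark-env.sh sketch matching this setup (the /root/hadoop2.7.2 install path is specific to this cluster, so adjust it to yours; setting YARN_CONF_DIR as well is optional):

# conf/spark-env.sh
# Hadoop 2.x keeps its configuration under etc/hadoop, not conf/
export HADOOP_CONF_DIR=/root/hadoop2.7.2/etc/hadoop
export YARN_CONF_DIR=/root/hadoop2.7.2/etc/hadoop

This also explains the "Source and destination file systems are the same. Not copying" lines in the logs above: without a readable core-site.xml, fs.defaultFS falls back to the local file system, so the jar and the __spark_conf__ zip were staged to file:/ paths that the NodeManagers on other machines could never see.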
I faced a similar error while running Spark on EMR. I had written my Spark code in Java 8, and in the EMR cluster Spark runs, by default, on Java 8. I then had to recreate the cluster with JAVA_HOME pointing to the Java 8 version. That solved my problem. Please check along similar lines.
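As a hedged sketch (the classification names follow the AWS EMR documentation; the JVM path is an assumption for your image), the JAVA_HOME override is supplied as a configurations JSON when the cluster is created, passed via the --configurations option of aws emr create-cluster:

[
  {
    "Classification": "hadoop-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": { "JAVA_HOME": "/usr/lib/jvm/java-1.8.0" }
      }
    ],
    "Properties": {}
  },
  {
    "Classification": "spark-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": { "JAVA_HOME": "/usr/lib/jvm/java-1.8.0" }
      }
    ],
    "Properties": {}
  }
]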
I faced a similar issue, but in my case the problem was related to having two core-site.xml files, one in $HADOOP_CONF_DIR and the other in $SPARK_HOME/conf. The problem went away when I removed the one under $SPARK_HOME/conf.
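A quick way to check for the duplication, sketched under the assumption that both environment variables are set:

# list every core-site.xml that Spark or Hadoop could pick up
ls -l "$HADOOP_CONF_DIR"/core-site.xml "$SPARK_HOME"/conf/core-site.xml

# if both exist, move the stale copy out of Spark's conf directory
mv "$SPARK_HOME"/conf/core-site.xml "$SPARK_HOME"/conf/core-site.xml.bak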
Whenever you run in yarn-cluster mode, a local file should be placed on all of the nodes, because we do not know which node will become the AM (ApplicationMaster) node, and your application always looks for the file from the AM node.
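A minimal staging sketch (the master's hostname is taken from the logs above; the second hostname is a placeholder for your worker):

# put the jar at the same absolute path on every node, so whichever
# node ends up hosting the AM can localize it
for host in sparkcluster01.testing.com sparkcluster02.testing.com; do
  ssh "$host" 'mkdir -p /app'
  scp /app/spark-test.jar "$host":/app/
done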
I ran into a case where I had to keep the KeyStore and KeyPass passwords in a file that my Spark job read at runtime. I kept the file under the /opt folder, at /opt/datapipeline/config/keystorePass, but my application kept failing with a FileNotFoundException.
After placing the keystorePass file on all of the nodes, the exception went away and the job succeeded.
The other approach is to keep the file in the HDFS file system instead of the local file system.
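A sketch of the HDFS alternative (the /beacon prefix mirrors the path used earlier in this question; adjust it to your layout):

# stage the file once in HDFS, where every container can read it
hdfs dfs -mkdir -p /beacon/config
hdfs dfs -put /opt/datapipeline/config/keystorePass /beacon/config/keystorePass

# the job then opens hdfs://sparkcluster01.testing.com:9000/beacon/config/keystorePass
# instead of a local path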