Not able to run a shell script with Oozie

Hi, I am trying to run a shell script through Oozie; while running the shell script I am getting the following error.

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]

My job.properties file:

nameNode=hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020
jobTracker=ip-172-31-41-199.us-west-2.compute.internal:8032
queueName=default
oozie.libpath=${nameNode}/user/oozie/share/lib/
oozie.use.system.libpath=true
oozie.wf.rerun.failnodes=true
oozieProjectRoot=shell_example
oozie.wf.application.path=${nameNode}/user/karun/${oozieProjectRoot}/apps/shell

My workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.1" name="pi.R example">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>script.sh</exec>
<file>/user/karun/oozie-oozi/script.sh#script.sh</file>
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
 </action>
 <kill name="fail">
 <message>Incorrect output</message>
</kill>
<end name="end"/>
</workflow-app>

My shell script, script.sh:

export SPARK_HOME=/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/spark
export YARN_CONF_DIR=/etc/hadoop/conf
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export HADOOP_CMD=/usr/bin/hadoop
/SparkR-pkg/lib/SparkR/sparkR-submit --master yarn-client examples/pi.R yarn-client 4 

The error log file:

WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/etc/hive-webhcat/conf.dist/webhcat-default.xml:
CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-kms:
LANG=en_US.UTF-8:
HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce:

=================================================================

Invoking Shell command line now >>

Stdoutput Running /opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/spark/bin/spark-submit --class edu.berkeley.cs.amplab.sparkr.SparkRRunner --files hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020/user/karun/examples/pi.R --master yarn-client /SparkR-pkg/lib/SparkR/sparkr-assembly-0.1.jar hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020/user/karun/examples/pi.R yarn-client 4
Stdoutput Fatal error: cannot open file 'pi.R': No such file or directory
Exit code of the Shell command 2
<<< Invocation of Shell command completed <<<
<<< Invocation of Main class completed <<<
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]

 Oozie Launcher failed, finishing Hadoop job gracefully

 Oozie Launcher, uploading action data to HDFS sequence file: hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020/user/karun/oozie-oozi/0000035-150722003725443-oozie-oozi-W/shell-node--shell/action-data.seq

 Oozie Launcher ends

I don't know how to resolve the issue. Any help would be appreciated.

sparkR-submit  ...  examples/pi.R  ...

Fatal error: cannot open file 'pi.R': No such file or directory

The message is pretty explicit: your shell is trying to read the R script from the local filesystem. But what exactly is "local" here?

Oozie runs your shell with YARN, so YARN allocates a container on a random machine. This is something you must keep in mind until it becomes a reflex: every resource needed by an Oozie action (scripts, libraries, config files, etc.) must

  1. be available in HDFS beforehand
  2. be downloaded at execution time, thanks to a <file> instruction in the Oozie script
  3. be accessed as a local file in the current working directory

In your case:

<exec>script.sh</exec>
<file>/user/karun/oozie-oozi/script.sh</file>
<file>/user/karun/some/place/pi.R</file>

and then

sparkR-submit  ...  pi.R  ...
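Putting it together, script.sh could look like this, a sketch that assumes pi.R has been staged in HDFS (the log suggests /user/karun/examples/pi.R) and is shipped into the container's working directory by a `<file>` directive, so the script can reference it by its bare name:

```shell
#!/bin/sh
# Beforehand, upload the R script to HDFS once (path is an assumption
# based on the log output):
#   hdfs dfs -put examples/pi.R /user/karun/examples/pi.R

export SPARK_HOME=/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/spark
export YARN_CONF_DIR=/etc/hadoop/conf
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export HADOOP_CMD=/usr/bin/hadoop

# pi.R is referenced as a plain local file: the <file> directive has
# already downloaded it into the container's current working directory.
/SparkR-pkg/lib/SparkR/sparkR-submit --master yarn-client pi.R yarn-client 4
```

This is a cluster-side fragment, not something runnable outside the Oozie/YARN container; the key change from the original script is passing `pi.R` instead of `examples/pi.R`.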