All nodes fail to start up

I have set up my configuration files and formatted my filesystem, but whenever I try to execute the start-up shell script I get the error below.

The alias for hstart is included further down (it's defined in my .bash_profile).

The error:

computer:~ seanplowman$ hstart
18/04/14 23:34:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans
localhost: Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-datanode-Seans
localhost: Error: Could not find or load main class Mac.log

Starting secondary namenodes [0.0.0.0]
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-secondarynamenode-Seans
0.0.0.0: Error: Could not find or load main class Mac.log

18/04/14 23:35:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
/usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-resourcemanager-Seans
Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-nodemanager-Seans
localhost: Error: Could not find or load main class Mac.log

jps also shows that none of the nodes come up after running the start scripts. From my research it looks like something is wrong with my hostnames, but trying to change them hasn't fixed anything.
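For reference, on a working pseudo-distributed setup I'd expect jps to list daemons along these lines, none of which show up for me:

NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager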

I'll include my other configuration files below for context, to show how they are set up.

/usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
  <description>The name of the default file system.  A URI whose scheme and 
  authority determine the FileSystem implementation.  The uri's scheme determines 
  the config property (fs.SCHEME.impl) naming the FileSystem implementation
  class.  The uri's authority is used to determine the host, port, etc. for a filesystem.
  </description>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:9010</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>

I have also made a few changes to my hadoop-env.sh; they are shown below.

/usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

.bashrc

#Hadoop variables
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
###end of paste

.bash_profile

alias hstart="/usr/local/hadoop/sbin/start-dfs.sh;/usr/local/hadoop/sbin/start-yarn.sh"
alias hstop="/usr/local/hadoop/sbin/stop-yarn.sh;/usr/local/hadoop/sbin/stop-dfs.sh"

I'm not sure what the next steps would be from here, having looked at just about every file involved.

I think your Mac's hostname has a space in it, e.g. Seans Mac.

The default log files are named using

HDFS: log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out
YARN: log=$YARN_LOG_DIR/yarn-$YARN_IDENT_STRING-$command-$HOSTNAME.out

where $HOSTNAME is the problem: the scripts don't expect it to contain a space.

If you look at the output, you'll notice hadoop-seanplowman-namenode-Seans, so I suspect:

HADOOP_IDENT_STRING = user running the scripts = seanplowman
command = namenode
HOSTNAME = Seans Mac
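That would also explain the "integer expression expected" errors. Here's a minimal sketch of the failure mode, loosely paraphrasing the log-rotation helper in hadoop-daemon.sh (the real code differs between Hadoop versions; this is only meant to illustrate the word splitting):

#!/usr/bin/env bash
# Sketch only -- mimics hadoop_rotate_log from hadoop-daemon.sh.
rotate_log () {
  log=$1              # with an unquoted call, $1 stops at the space:
  num=5               #   ".../hadoop-seanplowman-namenode-Seans"
  if [ -n "$2" ]; then
    num=$2            # ...and "Mac.out" lands here as the rotation count
  fi
  while [ $num -gt 1 ]; do   # -> "[: Mac.out: integer expression expected"
    num=$((num - 1))
  done
}

log="/usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans Mac.out"
rotate_log $log       # unquoted $log: the space turns one argument into two

The same word splitting presumably mangles the java command line later on, which is why "Mac.log" gets parsed as a main class.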

See if fixing the hostname so that it has no spaces changes anything.
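On a Mac the hostname is typically changed with scutil, e.g. (seans-mac is just a placeholder for whatever space-free name you choose):

sudo scutil --set HostName seans-mac
sudo scutil --set LocalHostName seans-mac
sudo scutil --set ComputerName seans-mac
hostname    # verify: should now print seans-mac

then restart the daemons with your hstop/hstart aliases.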

If not, edit the yarn-daemon.sh and hadoop-daemon.sh scripts so that they start with

#!/usr/bin/env bash
set -xv
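(set -x traces every command with its expanded arguments as it runs, and set -v echoes each script line as it is read, so the trace should show exactly where the log path gets split apart.)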

Then edit the question with that output.