Word-count job hangs in Hadoop: compiled, submitted, accepted and never terminates

I have successfully configured a Hadoop cluster on AWS EC2, at least to the point that issuing the jps command on each type of node gives the following output:

6544 ResourceManager
4305 JobHistoryServer
7004 Jps
6252 NameNode

And likewise:

2753 NodeManager
2614 DataNode
3051 Jps

Following the standard Apache tutorial for creating the WordCount program, I have completed all the prerequisite steps and compiled both the Java class and the .jar, as described here.

However, when I execute the program with the following command:

$HADOOP_HOME/bin/hadoop jar wc.jar WordCount /user/wordcount /user/output2

the job hangs and prints the following to my console:

The admin web interface shows the following:

Could it be something to do with my YARN setup?
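Presumably the standard YARN CLI can shed some light here; for example, from the master (the commands are stock YARN, I am only guessing at what they will show in this situation):

yarn node -list                          # both slaves should be listed as RUNNING
yarn application -list -appStates ALL    # is the job stuck in ACCEPTED, or actually RUNNING?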

When setting up this environment, I mostly followed this tutorial.

Here is how my configuration files are laid out:

yarn-site.xml:

<configuration>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>128</value>
        <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
        <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
        <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>2</value>
        <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
        <description>Physical memory, in MB, to be made available to running containers</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
        <description>Number of CPU cores that can be allocated for containers.</description>
    </property>
</configuration>
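If I am reading these limits right, each NodeManager advertises 4096 MB and 4 vcores, so the scheduler can fit at most 4096 / 2048 = 2 maximum-size containers per node (or up to 4096 / 128 = 32 minimum-size ones).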

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user/app</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Djava.security.egd=file:/dev/../dev/urandom</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>  
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>172.31.46.85:50090</value>
  </property>
</configuration>

core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020/</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>

It may also be relevant to see how my ~/.bashrc is configured; boilerplate aside, it looks like this:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=${JAVA_HOME}/jre/lib:${PATH}
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar

# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
# adding support for jre
export PATH=$PATH:$JAVA_HOME/jre/bin
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.

#trying to get datanode to work :/
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.egd=file:/dev/../dev/urandom"

Make sure to delete all of the folders here:

/usr/local/hadoop_work/hdfs/namenode/
/usr/local/hadoop_work/hdfs/datanode
/usr/local/hadoop_work/hdfs/namesecondary

Usually it is enough to do something along the lines of rm -rf current/ inside each of them.
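As a rough sketch, the full reset looks something like this (careful: reformatting destroys everything stored in HDFS; the paths are the ones from my hdfs-site.xml):

stop-yarn.sh
stop-dfs.sh

# on every node:
rm -rf /usr/local/hadoop_work/hdfs/namenode/current
rm -rf /usr/local/hadoop_work/hdfs/datanode/current
rm -rf /usr/local/hadoop_work/hdfs/namesecondary/current

# on the master only, so a fresh clusterID gets generated:
hdfs namenode -format

start-dfs.sh
start-yarn.sh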

The corresponding configuration:

yarn-site.xml

<configuration>
  <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce_shuffle</value>
  </property>
  <property>
     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>

It turns out that setting yarn.resourcemanager.hostname is really important; this had me confused for a while :/
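My understanding of why: yarn.resourcemanager.hostname defaults to 0.0.0.0, and the NodeManagers derive the ResourceManager's addresses from it, so each slave effectively tries to register with itself, never shows up at the RM, and every submitted job sits in ACCEPTED forever because there are no containers to run it in.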

core-site.xml

<configuration>
  <property>
     <name>fs.defaultFS</name>
     <value>hdfs://master:9000</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
     <name>dfs.replication</name>
     <value>1</value>
  </property>
  <property>
     <name>dfs.namenode.name.dir</name>
     <value>file:/usr/local/hadoop_work/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namesecondary</value>
  </property>
  <property>
     <name>dfs.datanode.data.dir</name>
     <value>file:/usr/local/hadoop_work/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>172.31.46.85:50090</value>
  </property>
</configuration>
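Once HDFS is back up, it is worth confirming that the datanodes actually registered, e.g.:

hdfs dfsadmin -report    # should show the expected number of live datanodes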

/etc/hosts

666.13.46.70  master
666.13.35.80  slave1
666.13.43.131 slave2
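To rule out name-resolution problems, each entry can be checked from every node, along the lines of:

getent hosts master slave1 slave2
ping -c 1 slave1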

Essentially, this is what you want to see:

Executing the command from the very simple tutorial:

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
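Before running either command, the input has to exist in HDFS; assuming some local text files in ~/books (a hypothetical path), for instance:

hdfs dfs -mkdir -p /input
hdfs dfs -put ~/books/*.txt /input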

And for this example:

$HADOOP_HOME/bin/hadoop jar wc.jar WordCount /input /output
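Either way, the results land in the part files under the output directory (part-r-00000 being the usual name for the first reducer's output):

hdfs dfs -cat /output/part-r-00000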