Hadoop "failed on connection exception: java.net.ConnectException: Connection refused"
I'm trying to run Hadoop commands in local mode. I'm on Mac OS X 10.10.5, and I get an error when putting a file into HDFS. Here is the error message from my Hadoop command:
$ sudo hadoop fs -put HG00103.mapped.ILLUMINA.bwa.GBR.low_coverage.20120522.bam /usr/ds/genomics
Password:
15/09/25 10:10:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: Call From BlueMeanie/10.0.1.5 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Here are the details of my system:
$ java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
$ hadoop version
Hadoop 2.3.0
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1567123
Compiled by jenkins on 2014-02-11T13:40Z
Compiled with protoc 2.5.0
From source with checksum dfe46336fbc6a044bc124392ec06b85
This command was run using /Users/davidlaxer/hadoop-2.3.0/share/hadoop/common/hadoop-common-2.3.0.jar
$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
10.0.1.5 BlueMeanie
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
$ telnet 10.1.1.5 9000
Trying 10.1.1.5...
^C
$ telnet localhost 9000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Trying fe80::1...
telnet: connect to address fe80::1: Connection refused
telnet: Unable to connect to remote host
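The telnet probes above simply test whether anything is listening on the NameNode's RPC port. The same check can be scripted; this is a sketch (not part of the original question), with the host and port taken from the fs.defaultFS value shown below:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

if __name__ == "__main__":
    # The NameNode RPC endpoint from core-site.xml (hdfs://localhost:9000).
    print(port_is_open("localhost", 9000))
```

"Connection refused" from both telnet and this check means no process is bound to the port at all, which points at the NameNode not running rather than a firewall or routing problem.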
$ env | grep HADOOP
HADOOP_HOME=/Users/dbl/hadoop-2.3.0/
HADOOP_CONF_DIR=/Users/dbl/hadoop-2.3.0/etc
$ cat core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
$ cat hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
$ cat yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
$ cat mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
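The HDFS client resolves the NameNode address from fs.defaultFS in core-site.xml; the "Call From ... to localhost:9000" in the error message comes directly from that value. A small sketch (not from the original post) of how that URI can be pulled out of the config:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

def default_fs(core_site_xml: str):
    """Extract (host, port) from the fs.defaultFS property of a core-site.xml document."""
    root = ET.fromstring(core_site_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            uri = urlparse(prop.findtext("value"))
            return uri.hostname, uri.port
    raise ValueError("fs.defaultFS not set")

CORE_SITE = """
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
"""
print(default_fs(CORE_SITE))  # ('localhost', 9000)
```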
$ sbin/start-dfs.sh
Starting namenodes on [2015-09-25 16:36:54,540 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
localhost]
[main]: ssh: Could not resolve hostname [main]: nodename nor servname provided, or not known
-: ssh: Could not resolve hostname -: nodename nor servname provided, or not known
Unable: ssh: Could not resolve hostname Unable: nodename nor servname provided, or not known
native-hadoop: ssh: Could not resolve hostname native-hadoop: nodename nor servname provided, or not known
load: ssh: Could not resolve hostname load: nodename nor servname provided, or not known
to: ssh: Could not resolve hostname to: nodename nor servname provided, or not known
for: ssh: Could not resolve hostname for: nodename nor servname provided, or not known
16:36:54,540: ssh: Could not resolve hostname 16:36:54,540: nodename nor servname provided, or not known
your: ssh: Could not resolve hostname your: nodename nor servname provided, or not known
platform...: ssh: Could not resolve hostname platform...: nodename nor servname provided, or not known
using: ssh: Could not resolve hostname using: nodename nor servname provided, or not known
builtin-java: ssh: Could not resolve hostname builtin-java: nodename nor servname provided, or not known
where: ssh: Could not resolve hostname where: nodename nor servname provided, or not known
applicable: ssh: Could not resolve hostname applicable: nodename nor servname provided, or not known
localhost: namenode running as process 99664. Stop it first.
2015-09-25: ssh: Could not resolve hostname 2015-09-25: nodename nor servname provided, or not known
WARN: ssh: Could not resolve hostname WARN: nodename nor servname provided, or not known
library: ssh: Could not resolve hostname library: nodename nor servname provided, or not known
classes: ssh: Could not resolve hostname classes: nodename nor servname provided, or not known
(NativeCodeLoader.java:<clinit>(62)): ssh: connect to host (NativeCodeLoader.java:<clinit>(62)) port 22: Operation timed out
util.NativeCodeLoader: ssh: connect to host util.NativeCodeLoader port 22: Operation timed out
cat: /Users/davidlaxer/hadoop-2.3.0/etc/hadoop/conf/slaves: No such file or directory
Starting secondary namenodes [2015-09-25 16:39:26,863 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0.0.0.0]
WARN: ssh: Could not resolve hostname WARN: nodename nor servname provided, or not known
[main]: ssh: Could not resolve hostname [main]: nodename nor servname provided, or not known
Unable: ssh: Could not resolve hostname Unable: nodename nor servname provided, or not known
to: ssh: Could not resolve hostname to: nodename nor servname provided, or not known
-: ssh: Could not resolve hostname -: nodename nor servname provided, or not known
native-hadoop: ssh: Could not resolve hostname native-hadoop: nodename nor servname provided, or not known
library: ssh: Could not resolve hostname library: nodename nor servname provided, or not known
for: ssh: Could not resolve hostname for: nodename nor servname provided, or not known
your: ssh: Could not resolve hostname your: nodename nor servname provided, or not known
platform...: ssh: Could not resolve hostname platform...: nodename nor servname provided, or not known
16:39:26,863: ssh: Could not resolve hostname 16:39:26,863: nodename nor servname provided, or not known
using: ssh: Could not resolve hostname using: nodename nor servname provided, or not known
builtin-java: ssh: Could not resolve hostname builtin-java: nodename nor servname provided, or not known
classes: ssh: Could not resolve hostname classes: nodename nor servname provided, or not known
where: ssh: Could not resolve hostname where: nodename nor servname provided, or not known
applicable: ssh: Could not resolve hostname applicable: nodename nor servname provided, or not known
0.0.0.0: secondarynamenode running as process 99006. Stop it first.
2015-09-25: ssh: Could not resolve hostname 2015-09-25: nodename nor servname provided, or not known
load: ssh: Could not resolve hostname load: nodename nor servname provided, or not known
(NativeCodeLoader.java:<clinit>(62)): ssh: connect to host (NativeCodeLoader.java:<clinit>(62)) port 22: Operation timed out
util.NativeCodeLoader: ssh: connect to host util.NativeCodeLoader port 22: Operation timed out
Well, running in single-node mode doesn't require you to start the Namenode, Datanode, etc.

Single Node (Standalone) mode, which works out of the box with a standard Hadoop installation, requires you to set fs.defaultFS to file:///, which means your local filesystem.

If you want to run in pseudo-distributed mode (which I guess you do, judging from your configuration and the fact that you ran start-dfs.sh), you also have to remember that communication between the daemons is performed over ssh, so you need to:
- edit your sshd_config file (after installing ssh with sshd support)
- add port 9000 (and, I believe, port 8020)

Then restart ssh and check whether you can connect to localhost via ssh. That is probably also what the strange messages you got when starting the Namenode and Datanode are about.
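For reference, the standalone setup the answer describes would mean a core-site.xml along these lines (a sketch of the suggested change, not the asker's actual file):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- file:/// points Hadoop at the local filesystem (standalone mode);
         no NameNode/DataNode daemons need to be running. -->
    <value>file:///</value>
  </property>
</configuration>
```

With this value, `hadoop fs -put` operates on local paths and never tries to dial localhost:9000.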