Retrying connect to server: hadoopmaster/IP:9000
I have installed Hadoop 2.9.0. Checking jps on each node, the slave nodes are running datanode and nodemanager, and the master node is running namenode and resourcemanager. Why am I getting this error? Following the suggested solutions, I disabled the firewall, SELinux, and IPv6, but the problem remains: Live Nodes in the web UI is 0.
/etc/hosts:
127.0.0.1 localhost
127.0.1.1 hadoopmaster
#The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.111.154 hadoopmaster
192.168.111... hadoopslave1
192.168.111... hadoopslave2
192.168.111... hadoopslave3
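Note that this file maps hadoopmaster twice: to 127.0.1.1 and to 192.168.111.154. Hosts-file lookup is first-match-wins, so on the master the name resolves to the loopback address 127.0.1.1, not the LAN address. A minimal sketch of that lookup behavior (the resolve helper below is illustrative only, not part of any Hadoop or system API):

```python
def resolve(hosts_text, name):
    """Mimic /etc/hosts lookup: the first matching entry wins."""
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()   # strip comments and blanks
        if not line:
            continue
        addr, *names = line.split()
        if name in names:
            return addr
    return None

hosts = """\
127.0.0.1 localhost
127.0.1.1 hadoopmaster
192.168.111.154 hadoopmaster
"""
print(resolve(hosts, "hadoopmaster"))  # → 127.0.1.1, not 192.168.111.154
```

This is why the NameNode, which binds to whatever hadoopmaster resolves to, ends up listening on a loopback address.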
Configuration on all nodes:
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoopmaster:9000</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoopmaster:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoopmaster:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoopmaster:8050</value>
</property>
</configuration>
Output of netstat -nap | grep 9000:
tcp 0 0 127.0.1.1:9000 0.0.0.0:* LISTEN 18205/java
tcp 0 0 127.0.0.1:59048 127.0.1.1:9000 TIME_WAIT -
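This output shows the NameNode listening on 127.0.1.1:9000, a loopback address, so connections from the slaves to 192.168.111.154:9000 are refused. A small sketch of why the bind address matters, using two loopback addresses on a typical Linux host (which accepts any 127.0.0.0/8 address on lo):

```python
import socket

# Bind a listener to 127.0.0.1 only, standing in for the NameNode that is
# effectively bound to 127.0.1.1 instead of the machine's LAN address.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # specific address, ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

# Connecting to the bound address works:
socket.create_connection(("127.0.0.1", port), timeout=1).close()

# Connecting to a different local address is refused -- analogous to what
# the datanodes see when they try hadoopmaster:9000 over the network:
try:
    socket.create_connection(("127.0.1.1", port), timeout=1)
    refused = False
except OSError:
    refused = True
print("refused:", refused)
```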
I restarted Hadoop and formatted the namenode again, but I still get this error (hadoop-hadoop-datanode-hadoopslave1.log):
2018-06-11 02:42:33,593 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: hadoopmaster/192.168.111.154:9000. Already tried 4
time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
I found the solution. Edit /etc/hosts:
- define the FQDN
- remove the 127.0.1.1 entry
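Concretely, the corrected /etc/hosts on the master might look like the sketch below. The FQDN hadoopmaster.example.com is a placeholder (use your cluster's real domain), and the elided slave addresses are left as in the question:

```
127.0.0.1       localhost
# 127.0.1.1 hadoopmaster   <- removed: it made the NameNode bind to loopback
192.168.111.154 hadoopmaster.example.com hadoopmaster
192.168.111...  hadoopslave1
192.168.111...  hadoopslave2
192.168.111...  hadoopslave3
```

After editing, restart HDFS and check that netstat now shows the NameNode listening on 192.168.111.154:9000 rather than 127.0.1.1:9000.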