hadoop namenode port in use
This is actually a standby HA namenode. It is configured with the same settings as the primary, and hdfs namenode -bootstrapStandby ran successfully. It comes up on the standard HTTP port 50070, as defined in the config file:
<property>
<name>dfs.namenode.http-address.ha-hadoop.namenode2</name>
<value>namenode2:50070</value>
</property>
Startup begins OK and then hits this:
15/02/02 08:06:17 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:50070
15/02/02 08:06:17 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
15/02/02 08:06:17 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
15/02/02 08:06:17 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
15/02/02 08:06:17 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
15/02/02 08:06:17 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
15/02/02 08:06:17 INFO http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: hadoop1:50070
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
... 8 more
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
15/02/02 08:06:17 FATAL namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: hadoop1:50070
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
... 8 more
15/02/02 08:06:17 INFO util.ExitUtil: Exiting with status 1
15/02/02 08:06:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1.marketstudies.com/192.168.1.125
************************************************************/
I tried changing the HTTP address port by setting:
<property>
<name>dfs.namenode.http-address.local1-hadoop.hadoop1</name>
<value>hadoop1:10070</value>
</property>
But I just get the same as above, only with the new port:
15/02/02 08:16:51 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070
This is the same configuration that works on the primary namenode.
This Question seems similar to my issue, but the answer did not help. I tried setting dfs.http.address to something else, but nothing changed. I believe that is the non-HA config option, which in HA setups is replaced by dfs.namenode.http-address.ha-name.namenodename.
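For reference, a minimal sketch of how the HA HTTP addresses are normally declared in hdfs-site.xml for both namenodes; the nameservice ha-hadoop matches the property above, while the namenode IDs namenode1/namenode2 are assumptions for illustration:
<!-- namenode IDs below are illustrative; use the IDs listed in dfs.ha.namenodes -->
<property>
  <name>dfs.ha.namenodes.ha-hadoop</name>
  <value>namenode1,namenode2</value>
</property>
<property>
  <name>dfs.namenode.http-address.ha-hadoop.namenode1</name>
  <value>namenode1:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.ha-hadoop.namenode2</name>
  <value>namenode2:50070</value>
</property>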
You can see here that it is not actually listening on the HTTP port at all:
# netstat -anp |grep LIST
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 946/sshd
tcp 0 0 0.0.0.0:46712 0.0.0.0:* LISTEN 2066/java
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 28892/java
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 28892/java
tcp 0 0 0.0.0.0:8480 0.0.0.0:* LISTEN 1471/java
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN 2358/zabbix_agentd
tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 28892/java
tcp 0 0 0.0.0.0:8485 0.0.0.0:* LISTEN 1471/java
tcp 0 0 0.0.0.0:8040 0.0.0.0:* LISTEN 2066/java
tcp 0 0 0.0.0.0:8042 0.0.0.0:* LISTEN 2066/java
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1020/mysqld
tcp6 0 0 :::22 :::* LISTEN 946/sshd
I tried starting it as root to see if it was some kind of permissions problem on the listening port, but it gave the same error.
Found the problem. It came from a bit of history in which this server's IP address was changed, but the new address was simply appended to the /etc/hosts file rather than replacing the old one. I think this confused the Hadoop startup, because it was trying to open 50070 on an interface that does not exist. The "port in use" error made this a bit confusing.
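A quick way to confirm this kind of mismatch is to compare what the hostname resolves to against the addresses actually assigned to the machine. A minimal sketch, using the hadoop1 hostname from the logs above:
# what /etc/hosts (and DNS) say the hostname resolves to
getent hosts hadoop1
# the addresses actually assigned to local interfaces
ip addr show    # or: ifconfig -a
If getent returns an address that does not appear in the interface list, the bind fails with "Cannot assign requested address", which Hadoop then surfaces as "Port in use".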
Download osquery from https://code.facebook.com/projects/658950180885092 and install it.
Run the osqueryi command. When the prompt comes up, use this SQL statement to see all running java processes and find their pids:
SELECT name, path, pid FROM processes where name= "java";
On a Mac you will get something that looks like this:
+------+--------------------------------------------------------------------------+-------+
| name | path | pid |
+------+--------------------------------------------------------------------------+-------+
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59446 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59584 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59676 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59790 |
+------+--------------------------------------------------------------------------+-------+
Issue a sudo kill PID command for each of those processes to make sure whatever is holding port 0.0.0.0:50070 is killed. Then retry sbin/start-dfs.sh, and the namenode should come up.
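If you would rather not install osquery, a sketch of an alternative that looks up the owning process directly; this assumes lsof (or ss on Linux) is available, and uses the port from the logs above:
# show the process listening on the NameNode web UI port
sudo lsof -iTCP:50070 -sTCP:LISTEN
# on Linux, ss gives the same answer
sudo ss -ltnp 'sport = :50070'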
Check which processes are running and using java with:
ps aux | grep java
After that, kill all the hadoop-related processes, using the PIDs from the command above, like so:
sudo kill -9 PID
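A slightly more targeted variation, assuming a JDK is on the PATH: jps lists only Java processes together with their main class, so the Hadoop daemons (NameNode, DataNode, and so on) are easy to pick out. The pid in the kill line is illustrative:
# list JVMs with their fully qualified main classes
jps -l
# then kill just the stale daemon
sudo kill -9 59446    # illustrative pid; take it from the jps output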