Ambari HDP 2.2: Port 8020 Connection refused
So I installed Ambari on a cluster of three AWS EC2 instances. All services show up green, i.e. they all appear to be running fine, and I can perform all HDFS file operations. However, whenever I try to run a simple wordcount job on one of the instances, it says it cannot reach port 8020.
$ hadoop jar /usr/hdp/2.2.6.3-1/hadoop-mapreduce/hadoop-mapreduce-examples-2.6.0.2.2.6.3-1.jar wordcount /tmp/wordcount/in /tmp/wordcount/out
15/08/21 05:28:02 INFO impl.TimelineClientImpl: Timeline service address: http://<fqdn for the namenode>:8188/ws/v1/timeline/
15/08/21 05:28:02 INFO client.RMProxy: Connecting to ResourceManager at <fqdn for the namenode>/10.0.0.55:8050
java.io.FileNotFoundException: File does not exist: hdfs://<fqdn for the namenode>:8020/hdp/apps/2.2.6.3-1/mapreduce/mapreduce.tar.gz
at org.apache.hadoop.fs.Hdfs.getFileStatus(Hdfs.java:137)
at org.apache.hadoop.fs.AbstractFileSystem.resolvePath(AbstractFileSystem.java:460)
at org.apache.hadoop.fs.FileContext.next(FileContext.java:2180)
at org.apache.hadoop.fs.FileContext.next(FileContext.java:2176)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.resolve(FileContext.java:2176)
at org.apache.hadoop.fs.FileContext.resolvePath(FileContext.java:595)
at org.apache.hadoop.mapreduce.JobSubmitter.addMRFrameworkToDistributedCache(JobSubmitter.java:753)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:435)
at org.apache.hadoop.mapreduce.Job.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
The FQDN is the output of hostname -f, which is what was used when creating the cluster.
I also tried telnet, and the connection is refused there as well.
bt-prod-dev-02@ip-10-0-0-55:~$ telnet localhost 8020
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
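For completeness, a quick way to check whether anything is listening on 8020 at all, and which NameNode address clients are actually configured to use (this assumes the stock HDP client tools are on the PATH):

$ sudo netstat -tlnp | grep 8020         # is any process bound to the NameNode RPC port?
$ hdfs getconf -confKey fs.defaultFS     # which host:port do HDFS clients expect?

If netstat shows nothing on 8020, the NameNode is either using a different RPC port or only listening on the FQDN interface rather than localhost.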
I'm not sure where to look next.
It turned out to be a simple "FileNotFoundException", which I had expected the Ambari installation to take care of. Once I put the file /usr/hdp/2.2.6.3-1/hadoop/mapreduce.tar.gz into hdfs://<fqdn for the namenode>:8020/hdp/apps/2.2.6.3-1/mapreduce/, the problem was solved.
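For anyone who hits the same thing, this is roughly what the upload looks like with the standard HDFS shell. The version string 2.2.6.3-1 matches my install, and the ownership/permission values are only what I believe Ambari would normally set, so treat it as a sketch rather than an exact recipe:

$ sudo -u hdfs hdfs dfs -mkdir -p /hdp/apps/2.2.6.3-1/mapreduce/    # framework dir in HDFS
$ sudo -u hdfs hdfs dfs -put /usr/hdp/2.2.6.3-1/hadoop/mapreduce.tar.gz /hdp/apps/2.2.6.3-1/mapreduce/
$ sudo -u hdfs hdfs dfs -chown -R hdfs:hadoop /hdp/apps/2.2.6.3-1   # assumed ownership
$ sudo -u hdfs hdfs dfs -chmod -R 555 /hdp/apps/2.2.6.3-1/mapreduce # read/execute only

After that, the wordcount example above ran without the FileNotFoundException.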