NameNode and DataNode not listed by jps
Environment: Ubuntu 14.04, Hadoop 2.6
After I run start-all.sh and jps, the DataNode is not listed in the terminal:
>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode
Following this answer: Datanode process not running in Hadoop, I tried the top-rated solution:
bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
rm -Rf /app/tmp/hadoop-your-username/*
bin/hadoop namenode -format (or hdfs in the 2.x series)
However, now I get this:
>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager
As you can see, even the NameNode is missing now. Please help.
DataNode logs: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032
NameNode logs: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
UPDATE
coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password:
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size (blocks, -c) 0
localhost: data seg size (kbytes, -d) unlimited
localhost: scheduling priority (-e) 0
localhost: file size (blocks, -f) unlimited
localhost: pending signals (-i) 3877
localhost: max locked memory (kbytes, -l) 64
localhost: max memory size (kbytes, -m) unlimited
localhost: open files (-n) 1024
localhost: pipe size (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password:
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size (blocks, -c) 0
localhost: data seg size (kbytes, -d) unlimited
localhost: scheduling priority (-e) 0
localhost: file size (blocks, -f) unlimited
localhost: pending signals (-i) 3877
localhost: max locked memory (kbytes, -l) 64
localhost: max memory size (kbytes, -m) unlimited
localhost: open files (-n) 1024
localhost: pipe size (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password:
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password:
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager
UPDATE
hadoop@ubuntu:/usr/local/hadoop/sbin$ $HADOOP_HOME ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/03 09:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
hadoop@localhost's password:
localhost: datanode running as process 28584. Stop it first.
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
15/05/03 09:32:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-ubuntu.out
hadoop@localhost's password:
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-ubuntu.out
hadoop@ubuntu:/usr/local/hadoop/sbin$ jps
6842 Jps
28584 DataNode
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"
This error is probably due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.
FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
This error is probably because the /usr/local/hadoop_store/hdfs/namenode folder has wrong permissions or does not exist. To fix it, check where these paths are configured (see the hdfs-site.xml sketch below), then follow one of these options:
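Both paths come from hdfs-site.xml. For reference, a typical single-node hdfs-site.xml matching the paths in your logs would look roughly like this (a sketch; your property values may differ):
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>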
Option 1:
If you don't have the /usr/local/hadoop_store/hdfs folder, create it and grant it permissions as follows:
sudo mkdir /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
Replace hadoopuser and hadoopgroup with your Hadoop username and group name, respectively. You can verify the ownership with the quick check sketched below. Now try starting the Hadoop processes; if the problem persists, try Option 2.
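A quick way to confirm that the chown took effect (a sketch, assuming your user and group are both named hadoop):
ls -ld /usr/local/hadoop_store/hdfs
# expected something like: drwxr-xr-x 4 hadoop hadoop 4096 ... /usr/local/hadoop_store/hdfs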
Option 2:
Delete the contents of the /usr/local/hadoop_store/hdfs folder:
sudo rm -r /usr/local/hadoop_store/hdfs/*
Change the folder permissions:
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
Now start the Hadoop processes; the full sequence is sketched below. It should work.
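Note that wiping the storage directory also deletes the NameNode's metadata, so a reformat is normally needed before the next start. A sketch of the full sequence, assuming Hadoop's bin and sbin directories are on your PATH:
stop-dfs.sh && stop-yarn.sh                      # stop any running daemons first
sudo rm -r /usr/local/hadoop_store/hdfs/*
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
hdfs namenode -format                            # metadata was wiped, so reformat
start-dfs.sh && start-yarn.sh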
NOTE: Post the new logs if the error persists.
UPDATE:
If you haven't created the hadoop user and group yet, follow these steps:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
Now change the ownership of /usr/local/hadoop and /usr/local/hadoop_store:
sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store
Switch your user to hadoop:
su - hadoop
Enter your hadoop user's password. Your terminal prompt should now look like this:
hadoop@ubuntu:$
Now, type:
$HADOOP_HOME/bin/start-all.sh
or
sh /usr/local/hadoop/bin/start-all.sh
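To double-check that the new user and the ownership changes took effect before starting the daemons (a sketch):
id hadoop                                          # the hadoop user and group should exist
ls -ld /usr/local/hadoop /usr/local/hadoop_store   # both should be owned by hadoop:hadoop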
I faced a similar problem where jps did not show the DataNode.
Deleting the contents of the hdfs folder and changing the folder permissions worked for me.
sudo rm -r /usr/local/hadoop_store/hdfs/*
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
hadoop namenode -format
start-all.sh
jps
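If the DataNode still does not appear, hdfs dfsadmin -report gives a more direct check than jps; once the daemon has registered with the NameNode it should report at least one available DataNode:
hdfs dfsadmin -report    # should report at least one available datanode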
One thing to keep in mind when setting up permissions:
ssh-keygen -t rsa -P ""
The above command should be entered on the NameNode only.
Then add the generated public key to all the DataNodes:
ssh-copy-id -i ~/.ssh/id_rsa.pub
This sets up the SSH permissions; after that, no password is needed when starting DFS.
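For reference, a complete passwordless-SSH setup might look like this (a sketch, assuming the user is hadoop and a hypothetical DataNode host named datanode1; on a single-node setup, localhost is the only target):
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa           # run on the namenode only
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost  # authorize the key locally
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode1  # repeat per datanode (hypothetical host)
ssh localhost true                                 # should return without a password prompt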
Faced the same problem: the NameNode service was not showing up in the jps output.
Solution: it was a permissions problem with the directory /usr/local/hadoop_store/hdfs. Just change the permissions, format the NameNode, and restart Hadoop:
$sudo chmod -R 755 /usr/local/hadoop_store/hdfs
$hadoop namenode -format
$start-all.sh
$jps
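To confirm the NameNode is actually serving after the restart, you can also hit its web UI (port 50070 is the Hadoop 2.x default; adjust if you changed it):
curl -s http://localhost:50070/ | head   # should return the NameNode status page HTML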
The solution is to first stop your NameNode. Go to your /usr/local/hadoop directory and format the NameNode:
bin/hdfs namenode -format
Then delete the hdfs and tmp directories from your home directory and recreate them:
mkdir ~/tmp
mkdir ~/hdfs
chmod 750 ~/hdfs
Go to the hadoop directory and start Hadoop:
`sbin/start-dfs.sh`
It will show the DataNode.
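This answer only applies if your configuration points Hadoop's working directories under your home directory; a hypothetical core-site.xml consistent with it (the path is an assumption, match it to your own setup):
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- hypothetical path; must match the ~/tmp created above -->
    <value>/home/youruser/tmp</value>
  </property>
</configuration>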
For this, you need to grant permissions to the hdfs folder. Then run the commands below:
- Create a group:
sudo addgroup hadoop
- Add your user to it:
sudo usermod -a -G hadoop "ur_user"
(You can check the current user with the whoami command.)
- Now change the owner of this hadoop_store directly:
sudo chown -R "ur_user":"ur_group" /usr/local/hadoop_store
- Then format the NameNode again with:
hdfs namenode -format
- Then start all the services; now run jps and you will see the result (it will work).
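Whichever answer you follow, the end state can be verified the same way (a sketch):
jps                      # NameNode and DataNode should both be listed now
hdfs dfsadmin -report    # confirms the DataNode has registered with the NameNode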