Error executing hdfs zkfc command
I am new to Hadoop and HDFS. I am performing the following steps:
I have started ZooKeeper on the three namenodes:
*vagrant@172:~$ zkServer.sh start
I can see the status:
*vagrant@172:~$ zkServer.sh status
Resulting status:
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
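For context: in a healthy three-node ensemble, exactly one server reports the leader role and the other two are followers, so on one of the machines the same command should print something like this (assuming the same install path):
*vagrant@172:~$ zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader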
Running the jps command shows only Jps, though sometimes the ZooKeeper quorum process (QuorumPeerMain) also appears:
*vagrant@172:~$ jps
2237 Jps
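When the ZooKeeper server is actually running on a node, jps should list QuorumPeerMain alongside Jps, roughly like this (the PID is illustrative):
*vagrant@172:~$ jps
2170 QuorumPeerMain
2237 Jps
If QuorumPeerMain is missing, ZooKeeper is not running on that node, and zkfc will not be able to reach it.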
When I run the next command:
*vagrant@172:~$ hdfs zkfc -formatZK
16/01/07 16:10:09 INFO zookeeper.ClientCnxn: Opening socket connection to server 172.16.8.192/172.16.8.192:2181. Will not attempt to authenticate using SASL (unknown error)
16/01/07 16:10:10 INFO zookeeper.ClientCnxn: Socket connection established to 172.16.8.192/172.16.8.192:2181, initiating session
16/01/07 16:10:11 INFO zookeeper.ClientCnxn: Session establishment complete on server 172.16.8.192/172.16.8.192:2181, sessionid = 0x2521cd93c970022, negotiated timeout = 6000
Usage: java zkfc [ -formatZK [-force] [-nonInteractive] ]
16/01/07 16:10:11 INFO ha.ActiveStandbyElector: Session connected.
16/01/07 16:10:11 INFO zookeeper.ZooKeeper: Session: 0x2521cd93c970022 closed
16/01/07 16:10:11 INFO zookeeper.ClientCnxn: EventThread shut down
16/01/07 16:10:12 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: –formatZK
at org.apache.hadoop.ha.ZKFailoverController.badArg(ZKFailoverController.java:251)
at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:214)
at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)
at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:172)
at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
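Note the dash in the logged argument: the exception reports Bad argument: –formatZK with an en-dash (–), while the usage string above expects an ASCII hyphen (-formatZK). This commonly happens when the command is copy-pasted from a document that converted the hyphen. Retyping the flag by hand rules this out:
*vagrant@172:~$ hdfs zkfc -formatZK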
Any help with this error would be greatly appreciated.
My configuration is as follows:
bashrc
###JAVA CONFIGURATION###
JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$JAVA_HOME/bin
###HADOOP CONFIGURATION###
HADOOP_PREFIX=/opt/hadoop-2.7.1/
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
###ZOOKEEPER###
export PATH=$PATH:/opt/zookeeper-3.4.6/bin
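One thing to watch in this .bashrc: JAVA_HOME and HADOOP_PREFIX are assigned but never exported, so only PATH is visible to child processes such as the Hadoop start scripts. A safer variant, assuming the same install locations:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_PREFIX=/opt/hadoop-2.7.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:/opt/zookeeper-3.4.6/bin
Alternatively, JAVA_HOME can be set in hadoop-env.sh, which the Hadoop scripts source directly.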
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///hdfs/data</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>auto-ha</value>
</property>
<property>
<name>dfs.ha.namenodes.auto-ha</name>
<value>nn01,nn02</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn01</name>
<value>172.16.8.191:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn01</name>
<value>172.16.8.191:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn02</name>
<value>172.16.8.192:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn02</name>
<value>172.16.8.192:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://172.16.8.191:8485;172.16.8.192:8485;172.16.8.193:8485/auto-ha</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/hdfs/journalnode</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/vagrant/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.auto-ha</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>172.16.8.191:2181,172.16.8.192:2181,172.16.8.193:2181</value>
</property>
</configuration>
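For clients (including the hdfs CLI itself) to resolve the logical nameservice auto-ha to the active NameNode, HA configurations normally also declare a failover proxy provider in hdfs-site.xml. A minimal sketch for this setup, using the standard provider shipped with Hadoop 2.7:
<property>
<name>dfs.client.failover.proxy.provider.auto-ha</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>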
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://auto-ha</value>
</property>
</configuration>
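A side note: fs.default.name is the deprecated Hadoop 1.x key. It still works on Hadoop 2.7, but the current equivalent is fs.defaultFS:
<property>
<name>fs.defaultFS</name>
<value>hdfs://auto-ha</value>
</property>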
zoo.cfg
tickTime=2000
dataDir=/opt/ZooData
clientPort=2181
initLimit=5
syncLimit=2
server.1=172.16.8.191:2888:3888
server.2=172.16.8.192:2888:3888
server.3=172.16.8.193:2888:3888
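One common pitfall with this zoo.cfg: each server must also have a myid file in dataDir whose content matches its server.N entry, e.g. on 172.16.8.191 (server.1):
echo 1 > /opt/ZooData/myid
If myid is missing or wrong, the ZooKeeper process exits shortly after zkServer.sh start, which would explain jps sometimes showing no QuorumPeerMain.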
In the file hdfs-site.xml:
*I have replaced all the IPs with the machine names.
Example:
172.16.8.191 --> machine_Name1
Then in the file /etc/hosts:
*I have added all the IPs with their respective names.
Now everything works correctly.
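For reference, the resulting /etc/hosts entries would look roughly like this (machine_Name1 comes from the example above; the other two names are placeholders for whatever the remaining hosts are called):
172.16.8.191 machine_Name1
172.16.8.192 machine_Name2
172.16.8.193 machine_Name3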