Error when starting HBase standalone on Linux Fedora Hyper-V Virtual Machine

UPDATE: I have solved the problem below (thanks to Mike for pointing it out). However, when I now run the "jps" command, as the quick start guide suggests, to check for the HMaster process, I get "command not found":

I searched for this and found that the command comes with Java, so here is the Java configuration on my machine:

In .bashrc and .bash_profile:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre
export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin

In hbase-env.sh:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre

My java location:

[hadoop@new-hbase-shuti logs]$ whereis java
java: /usr/bin/java /usr/lib/java /etc/java /usr/share/java /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre/bin/java /usr/share/man/man1/java.1.gz

My java version:

[hadoop@new-hbase-shuti logs]$ java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
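
Everything above points JAVA_HOME at the jre/ directory, and jps is a JDK tool rather than a JRE one, so there is no jps under $JAVA_HOME/bin to find. A quick check (a sketch; on Fedora the JDK-side bin/ only exists once java-1.8.0-openjdk-devel is installed, which is an assumption here):

ls "$JAVA_HOME/bin/jps"    # expected to fail while JAVA_HOME ends in /jre
sudo dnf install -y java-1.8.0-openjdk-devel    # provides the JDK tools, including jps
ls /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/bin/jps    # should exist afterwards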

Here is the new log file from HBase (hbase-hadoop-master-new-hbase-shuti.log):

I followed the quick start guide to install standalone HBase only. Here is my setup:

  1. I was not quite sure which HBase package to use, but the guide says to pick the stable one, so I downloaded this: http://mirrors.standaloneinstaller.com/apache/hbase/stable/hbase-2.2.3-bin.tar.gz
  2. In conf/hbase-env.sh I only set the JAVA_HOME path (the export shown above).
  3. conf/hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/testuser/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/testuser/zookeeper</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
    <description>
      Controls whether HBase will check for stream capabilities (hflush/hsync).

      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
      with the 'file://' scheme, but be mindful of the NOTE below.

      WARNING: Setting this to false blinds you to potential data loss and
      inconsistent system state in the event of process and/or node failures. If
      HBase is complaining of an inability to use hsync or hflush it's most
      likely not a false positive.
    </description>
  </property>
</configuration>
  4. Then I ran the start-hbase.sh script from bin.

  5. But I got this error:

    /home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name
    /home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: invalid variable name
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/hadoop/hbase-2-2-3/hbase-2.2.3/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    running master, logging to /home/hadoop/hbase-2-2-3/hbase-2.2.3/bin/../logs/hbase-hadoop-master-new-hbase-shuti.out
    /home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name
    /home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: invalid variable name
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/hadoop/hbase-2-2-3/hbase-2.2.3/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

I have also attached HBase's error log file below. Could someone familiar with HBase help me? Thank you very much.

来自 "hbase-hadoop-master-new-hbase-shuti.log"

的错误
Thu 26 Mar 2020 08:59:07 PM CET Starting master on new-hbase-shuti
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7523
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 7523
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2020-03-26 20:59:07,933 INFO  [main] master.HMaster: STARTING service HMaster
2020-03-26 20:59:07,934 INFO  [main] util.VersionInfo: HBase 2.2.3
2020-03-26 20:59:07,934 INFO  [main] util.VersionInfo: Source code repository git://hao-OptiPlex-7050/home/hao/open_source/hbase revision=6a830d87542b766bd3dc4cfdee28655f62de3974
2020-03-26 20:59:07,934 INFO  [main] util.VersionInfo: Compiled by hao on 2020年 01月 10日 星期五 18:27:51 CST
2020-03-26 20:59:07,934 INFO  [main] util.VersionInfo: From source with checksum 097925184b85f6995e20da5462b10f3f
2020-03-26 20:59:08,190 INFO  [main] master.HMasterCommandLine: Starting a zookeeper cluster
2020-03-26 20:59:08,204 INFO  [main] server.ZooKeeperServer: Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:host.name=new-hbase-shuti.mshome.net
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:java.version=1.8.0_222
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:java.vendor=Oracle Corporation
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: vices-core-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.1.3.jar:/home/hadoop/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.1.3.jar:/home/hadoop/hbase-2-2-3/hbase-2.2.3/bin/../lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:java.library.path=/home/hadoop/hadoop//lib/native
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:os.name=Linux
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:os.version=5.3.7-301.fc31.x86_64
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:user.name=hadoop
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:user.home=/home/hadoop
2020-03-26 20:59:08,205 INFO  [main] server.ZooKeeperServer: Server environment:user.dir=/home/hadoop/hbase-2-2-3/hbase-2.2.3/bin
2020-03-26 20:59:08,207 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.IOException: Unable to create data directory /home/testuser/zookeeper/zookeeper_0/version-2
    at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:85)
    at org.apache.zookeeper.server.ZooKeeperServer.<init>(ZooKeeperServer.java:224)
    at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:229)
    at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:187)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:210)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2940)
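
The exception above is the actual failure: the embedded ZooKeeper cannot create /home/testuser/zookeeper/zookeeper_0/version-2 while the process runs as the hadoop user (user.name=hadoop in the environment dump), because hbase-site.xml still uses the quick start guide's /home/testuser paths. A sketch of two possible fixes, using the directory names from the config above:

# Option 1: keep the quick-start paths but make them writable by the hadoop user
sudo mkdir -p /home/testuser/hbase /home/testuser/zookeeper
sudo chown -R hadoop:hadoop /home/testuser/hbase /home/testuser/zookeeper

# Option 2 (simpler): edit hbase-site.xml so that hbase.rootdir and
# hbase.zookeeper.property.dataDir point under the running user's home,
# e.g. file:///home/hadoop/hbase and /home/hadoop/zookeeper, then restart HBase.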

来自 "hbase-hadoop-master-new-hbase-shuti.out" 的错误:

/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name
/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: invalid variable name
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-2-2-3/hbase-2.2.3/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
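
The "invalid variable name" and duplicate SLF4J binding messages are not the fatal error; they appear because bin/hbase hands part of its startup off to the separate Hadoop install it finds on this machine (/home/hadoop/hadoop/libexec/hadoop-functions.sh). For a standalone quick-start run, HBase's bundled jars are sufficient, so one low-risk experiment (a sketch; the trimmed PATH is illustrative) is to start HBase from a shell where that Hadoop is not visible:

env -u HADOOP_HOME PATH="/usr/bin:/bin:$JAVA_HOME/bin" \
    /home/hadoop/hbase-2-2-3/hbase-2.2.3/bin/start-hbase.sh
# Some HBase 2.x releases also honour HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=true in
# hbase-env.sh for the same purpose; check your own bin/hbase script before relying on it.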

I have corrected my JAVA_HOME environment path to make sure it points to the JDK rather than the JRE.
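
For reference, a sketch of what the corrected lines look like with the paths from this machine, assuming the matching java-1.8.0-openjdk-devel package is installed so the JDK-side bin/ (and jps) exists:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin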

After digging deeper into ... hadoop, I found that, in my case, it had something to do with ubuntu user permissions ...

vi /opt/hadoop/libexec/hadoop-functions.sh

function hadoop_verify_user_resolves
{
...
}

So I decided to add these lines to /opt/hbase/conf/hbase-env.sh:

export HBASE_SSH_OPTS="-p 22 -l daniel"
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"
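
After a change like this, a quick way to confirm the master actually came up (a sketch; the paths follow the /opt/hbase layout used just above, and it assumes jps is now on the PATH):

/opt/hbase/bin/stop-hbase.sh
/opt/hbase/bin/start-hbase.sh
jps    # a standalone setup should show a single HMaster process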