SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS] - HBase Master failed to become active

I am trying to set up a 3-node HBase cluster. I have been trying to configure secure HBase for a week now, but I keep getting this error:

ERROR [Thread-15] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]

I am running HBase 2.0.5 and Hadoop 3.1.2. Secure Hadoop is set up and seems to be working fine. I created a KDC and set up a realm, created a keytab with all the principals I need, and distributed it across the cluster.
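
For reference, with an MIT KDC the principals and keytab can be created roughly as follows (a sketch only; master-host, node1 and MYREALM are the placeholders used in the configs below, not real values):

# On the KDC: one service principal per host, exported into a single keytab
kadmin.local -q "addprinc -randkey hadoop/master-host@MYREALM"
kadmin.local -q "addprinc -randkey hadoop/node1@MYREALM"
kadmin.local -q "ktadd -k /home/hadoop/hadoop.keytab hadoop/master-host@MYREALM hadoop/node1@MYREALM"

# On each node: check the keytab and confirm a ticket can be obtained
klist -kt /home/hadoop/hadoop.keytab
kinit -kt /home/hadoop/hadoop.keytab hadoop/$(hostname -f)@MYREALM
hdfs dfs -ls /    # should succeed against secure HDFS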

Here are my configuration files:

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master-host:9000</value>
    </property>
    <property>
      <name>hadoop.security.authentication</name>
      <value>kerberos</value>
    </property>
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/home/hadoop/hadoop-3.1.2/hadoop/data/namenode</value>
      <description>NameNode directory</description>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/home/hadoop/hadoop-3.1.2/hadoop/data/datanode</value>
      <description>DataNode directory</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>node1:50090</value>
      <description>Secondary NameNode HTTP address</description>
    </property>

<!-- General HDFS security config -->
    <property>
      <name>dfs.block.access.token.enable</name>
      <value>true</value>
    </property>

<!-- NameNode security config -->
    <property>
      <name>dfs.namenode.keytab.file</name>
      <value>/home/hadoop/hadoop.keytab</value> <!-- path to the HDFS keytab -->
    </property>
    <property>
      <name>dfs.namenode.kerberos.principal</name>
      <value>hadoop/_HOST@MYREALM</value>
    </property>
    <property>
      <name>dfs.namenode.kerberos.internal.spnego.principal</name>
      <value>hadoop/_HOST@MYREALM</value>
    </property>

<!-- Secondary NameNode security config -->
    <property>
      <name>dfs.secondary.namenode.keytab.file</name>
      <value>/home/hadoop/hadoop.keytab</value> <!-- path to the HDFS keytab -->
    </property>
    <property>
      <name>dfs.secondary.namenode.kerberos.principal</name>
      <value>hadoop/_HOST@MYREALM</value>
    </property>
    <property>
      <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
      <value>hadoop/_HOST@MYREALM</value>
    </property>

<!-- DataNode security config -->

    <property>
      <name>dfs.datanode.data.dir.perm</name>
      <value>700</value>
    </property>

    <property>
      <name>dfs.datanode.keytab.file</name>
      <value>/home/hadoop/hadoop.keytab</value> <!-- path to the HDFS keytab -->
    </property>

    <property>
      <name>dfs.datanode.kerberos.principal</name>
      <value>hadoop/_HOST@MYREALM</value>
    </property> 

    <property>
      <name>dfs.data.transfer.protection</name>
      <value>integrity</value>
    </property>

    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:10019</value>
    </property>

    <property>
      <name>dfs.datanode.http.address</name>
      <value>0.0.0.0:10022</value>
    </property>

    <property>
      <name>dfs.http.policy</name>
      <value>HTTPS_ONLY</value>
    </property>

<!-- JournalNode -->
    <property>
      <name>dfs.journalnode.kerberos.principal</name>
      <value>hadoop/_HOST@MYREALM</value>
    </property>
    <property>
      <name>dfs.journalnode.keytab.file</name>
      <value>/home/hadoop/hadoop.keytab</value>
    </property>

<!-- Web Authentication config -->
    <property>
      <name>dfs.web.authentication.kerberos.principal</name>
      <value>hadoop/_HOST@MYREALM</value>
    </property>
</configuration>

hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master-host:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>true</value>
    <description>
      Controls whether HBase will check for stream capabilities (hflush/hsync).

      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
      with the 'file://' scheme, but be mindful of the NOTE below.

      WARNING: Setting this to false blinds you to potential data loss and
      inconsistent system state in the event of process and/or node failures. If
      HBase is complaining of an inability to use hsync or hflush it's most
      likely not a false positive.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master-host,node1</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>


<!-- security --> 
<!-- Secure Client -->
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hbase.security.authorization</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
  </property>

  <property>
    <name>hbase.coprocessor.master.classes</name>
    <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
  </property>

  <property>
    <name>hbase.client.keytab.file</name>
    <value>/home/hadoop/hadoop.keytab</value>
  </property>

  <property>
    <name>hbase.client.keytab.principal</name>
    <value>hadoop/_HOST@MYREALM</value>
  </property>

  <property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hadoop/_HOST@MYREALM</value>
  </property>

  <property>
    <name>hbase.regionserver.keytab.file</name>
    <value>/home/hadoop/hadoop.keytab</value>
  </property>

  <property>
    <name>hbase.master.kerberos.principal</name>
    <value>hadoop/_HOST@MYREALM</value>
  </property>

  <property>
    <name>hbase.master.keytab.file</name>
    <value>/home/hadoop/hadoop.keytab</value>
  </property>

</configuration>

Hadoop and HBase run fine in non-secure mode, so I am not sure where the error is coming from. Here is the log (names changed and IP addresses masked):

2019-07-18 16:06:37,208 DEBUG [Thread-15] zookeeper.ZKUtil: master:16000-0x6c0567f2410000, quorum=XX.XX.XX.X:2181,XX.XX.XX.X:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2019-07-18 16:06:37,209 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 55,3  replyHeader:: 55,309237645321,0  request:: '/hbase/backup-masters/host-master%2C16000%2C1563458793521,F  response:: s{309237645320,309237645320,1563458783461,1563458783461,0,0,0,30405241488867328,65,0,309237645320} 
2019-07-18 16:06:37,209 INFO  [Thread-15] master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/host-master,16000,1563458793521 from backup master directory
2019-07-18 16:06:37,220 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Got notification sessionid:0x6c0567f2410000
2019-07-18 16:06:37,220 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Got WatchedEvent state:SyncConnected type:NodeDeleted path:/hbase/backup-masters/host-master,16000,1563458793521 for sessionid 0x6c0567f2410000
2019-07-18 16:06:37,221 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 56,2  replyHeader:: 56,309237645322,0  request:: '/hbase/backup-masters/host-master%2C16000%2C1563458793521,-1  response:: null
2019-07-18 16:06:37,222 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 57,3  replyHeader:: 57,309237645322,0  request:: '/hbase/master,T  response:: s{309237645321,309237645321,1563458783540,1563458783540,0,0,0,30405241488867328,65,0,309237645321} 
2019-07-18 16:06:37,227 DEBUG [main-EventThread] zookeeper.ZKUtil: master:16000-0x6c0567f2410000, quorum=XX.XX.XX.X:2181,XX.XX.XX.X:2181, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2019-07-18 16:06:37,227 DEBUG [main-EventThread] zookeeper.ZKWatcher: master:16000-0x6c0567f2410000, quorum=XX.XX.XX.X:2181,XX.XX.XX.X:2181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/host-master,16000,1563458793521
2019-07-18 16:06:37,228 INFO  [Thread-15] master.ActiveMasterManager: Registered as active master=host-master,16000,1563458793521
2019-07-18 16:06:37,368 ERROR [Thread-15] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:671)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem.invoke(HFileSystem.java:372)
    at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2610)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:292)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:698)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:241)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:823)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2241)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:567)
    at java.lang.Thread.run(Thread.java:748)
2019-07-18 16:06:37,373 ERROR [Thread-15] master.HMaster: ***** ABORTING master host-master,16000,1563458793521: Unhandled exception. Starting shutdown. *****
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:671)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem.invoke(HFileSystem.java:372)
    at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2610)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:292)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:698)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:241)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:823)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2241)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:567)
    at java.lang.Thread.run(Thread.java:748)
2019-07-18 16:06:37,381 INFO  [Thread-15] regionserver.HRegionServer: ***** STOPPING region server 'host-master,16000,1563458793521' *****
2019-07-18 16:06:37,382 INFO  [Thread-15] regionserver.HRegionServer: STOPPED: Stopped by Thread-15
2019-07-18 16:06:37,385 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 58,4  replyHeader:: 58,309237645322,0  request:: '/hbase/master,F  response:: #ffffffff000146d61737465723a31363030305dffffffd4ffffffd4787b2dffffffd34d50425546a1da1170726f746f732d6465762d6d617374657210ffffff807d18ffffffb1ffffffe0ffffff9fffffffabffffffc02d10018ffffff8a7d,s{309237645321,309237645321,1563458783540,1563458783540,0,0,0,30405241488867328,65,0,309237645321} 
2019-07-18 16:06:37,392 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Got notification sessionid:0x6c0567f2410000
2019-07-18 16:06:37,392 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Got WatchedEvent state:SyncConnected type:NodeDeleted path:/hbase/master for sessionid 0x6c0567f2410000
2019-07-18 16:06:37,393 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 59,2  replyHeader:: 59,309237645323,0  request:: '/hbase/master,-1  response:: null
2019-07-18 16:06:37,393 DEBUG [main-EventThread] zookeeper.ZKWatcher: master:16000-0x6c0567f2410000, quorum=XX.XX.XX.X:2181,XX.XX.XX.X:2181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2019-07-18 16:06:37,395 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 60,3  replyHeader:: 60,309237645323,-101  request:: '/hbase/master,T  response::  
2019-07-18 16:06:37,396 DEBUG [main-EventThread] zookeeper.ZKUtil: master:16000-0x6c0567f2410000, quorum=XX.XX.XX.X:2181,XX.XX.XX.X:2181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2019-07-18 16:06:40,205 INFO  [master/host-master:16000] ipc.NettyRpcServer: Stopping server on /XX.XX.XX.X:16000
2019-07-18 16:06:40,205 INFO  [master/host-master:16000] token.AuthenticationTokenSecretManager: Stopping leader election, because: SecretManager stopping
2019-07-18 16:06:40,206 DEBUG [ZKSecretWatcher-leaderElector] zookeeper.ZKLeaderManager: Interrupted waiting on leader
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.hbase.zookeeper.ZKLeaderManager.waitToBecomeLeader(ZKLeaderManager.java:143)
    at org.apache.hadoop.hbase.security.token.AuthenticationTokenSecretManager$LeaderElector.run(AuthenticationTokenSecretManager.java:336)
2019-07-18 16:06:40,213 DEBUG [master/host-master:16000] regionserver.HRegionServer: About to register with Master.
2019-07-18 16:06:40,214 INFO  [master/host-master:16000] regionserver.HRegionServer: Stopping infoServer
2019-07-18 16:06:40,236 INFO  [master/host-master:16000] handler.ContextHandler: Stopped o.e.j.w.WebAppContext@2418ba04{/,null,UNAVAILABLE}{file:/home/hadoop/hbase-2.0.5/hbase-webapps/master}
2019-07-18 16:06:40,242 INFO  [master/host-master:16000] server.AbstractConnector: Stopped ServerConnector@c5fadbd{HTTP/1.1,[http/1.1]}{0.0.0.0:16010}
2019-07-18 16:06:40,243 INFO  [master/host-master:16000] handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@747d1932{/static,file:///home/hadoop/hbase-2.0.5/hbase-webapps/static/,UNAVAILABLE}
2019-07-18 16:06:40,243 INFO  [master/host-master:16000] handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@6ffa56fa{/logs,file:///home/hadoop/hbase-2.0.5/logs/,UNAVAILABLE}
2019-07-18 16:06:40,248 INFO  [master/host-master:16000] regionserver.HRegionServer: stopping server host-master,16000,1563458793521
2019-07-18 16:06:40,254 INFO  [master/host-master:16000] regionserver.HRegionServer: stopping server host-master,16000,1563458793521; all regions closed.
2019-07-18 16:06:40,254 INFO  [master/host-master:16000] hbase.ChoreService: Chore service for: master/host-master:16000 had [] on shutdown
2019-07-18 16:06:40,254 DEBUG [master/host-master:16000] master.HMaster: Stopping service threads
2019-07-18 16:06:40,256 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 61,4  replyHeader:: 61,309237645328,-101  request:: '/hbase/master,F  response::  
2019-07-18 16:06:40,259 DEBUG [master/host-master:16000] zookeeper.ZKUtil: master:16000-0x6c0567f2410000, quorum=XX.XX.XX.X:2181,XX.XX.XX.X:2181, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2019-07-18 16:06:40,259 WARN  [master/host-master:16000] master.ActiveMasterManager: Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2019-07-18 16:06:40,259 ERROR [master/host-master:16000] access.TableAuthManager: Something wrong with the TableAuthManager reference counting: org.apache.hadoop.hbase.security.access.TableAuthManager@680d4a6a whose count is null
2019-07-18 16:06:40,265 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 62,2  replyHeader:: 62,309237645329,-101  request:: '/hbase/rs/host-master%2C16000%2C1563458793521,-1  response:: null
2019-07-18 16:06:40,266 DEBUG [master/host-master:16000] zookeeper.RecoverableZooKeeper: Node /hbase/rs/host-master,16000,1563458793521 already deleted, retry=false
2019-07-18 16:06:40,267 DEBUG [master/host-master:16000] zookeeper.ZooKeeper: Closing session: 0x6c0567f2410000
2019-07-18 16:06:40,267 DEBUG [master/host-master:16000] zookeeper.ClientCnxn: Closing client for session: 0x6c0567f2410000
2019-07-18 16:06:40,274 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x6c0567f2410000, packet:: clientPath:null serverPath:null finished:false header:: 63,-11  replyHeader:: 63,309237645330,0  request:: null response:: null
2019-07-18 16:06:40,274 DEBUG [master/host-master:16000] zookeeper.ClientCnxn: Disconnecting client for session: 0x6c0567f2410000
2019-07-18 16:06:40,274 INFO  [master/host-master:16000] zookeeper.ZooKeeper: Session: 0x6c0567f2410000 closed
2019-07-18 16:06:40,275 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x6c0567f2410000
2019-07-18 16:06:40,276 DEBUG [main-SendThread(XX.XX.XX.X:2181)] zookeeper.ClientCnxn: An exception was thrown while closing send thread for session 0x6c0567f2410000 : Unable to read additional data from server sessionid 0x6c0567f2410000, likely server has closed socket
2019-07-18 16:06:40,276 INFO  [master/host-master:16000] regionserver.HRegionServer: Exiting; stopping=host-master,16000,1563458793521; zookeeper connection closed.

I have tried almost everything I could find on the Internet.

OK, I tried copying hdfs-site.xml and core-site.xml into the HBase conf directory, and that solved the problem.
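
Concretely, the workaround amounts to this (paths follow the tarball layout above and assume the default etc/hadoop config directory):

# Make the secured Hadoop client config visible to HBase
cp /home/hadoop/hadoop-3.1.2/etc/hadoop/core-site.xml /home/hadoop/hbase-2.0.5/conf/
cp /home/hadoop/hadoop-3.1.2/etc/hadoop/hdfs-site.xml /home/hadoop/hbase-2.0.5/conf/

Without these files on its classpath, HBase talks to the NameNode with the default hadoop.security.authentication=simple, which would explain the SIMPLE-not-enabled rejection above.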

I am not sure this is the "proper" way to configure HBase, though; I would rather have HBase read hdfs-site.xml and core-site.xml directly from the Hadoop conf directory. Any idea how to do that?
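
One commonly suggested alternative (per the HBase reference guide) is to put the Hadoop conf directory on HBase's classpath via hbase-env.sh, or to symlink the two files so there is a single source of truth, instead of copying them. A sketch, assuming the same layout as above:

# In /home/hadoop/hbase-2.0.5/conf/hbase-env.sh:
export HBASE_CLASSPATH=/home/hadoop/hadoop-3.1.2/etc/hadoop

# Or symlink instead of copying:
ln -s /home/hadoop/hadoop-3.1.2/etc/hadoop/core-site.xml /home/hadoop/hbase-2.0.5/conf/core-site.xml
ln -s /home/hadoop/hadoop-3.1.2/etc/hadoop/hdfs-site.xml /home/hadoop/hbase-2.0.5/conf/hdfs-site.xml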