Connecting to HBase remotely using Spark Scala

I have configured Hadoop and Spark on my Windows machine (my local host), and I have Cloudera installed in a VM (on the same machine), which includes HBase.
I am trying to use Spark Streaming to extract data and put it into HBase in the VM.

Is this possible?

What I have tried:

package hbase

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory

object Connect {

  def main(args: Array[String]): Unit = {
    val tablename = "Acadgild_spark_Hbase"

    // Point the client at the ZooKeeper quorum running inside the VM
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "192.168.117.133")
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(hbaseConf)
    val admin = connection.getAdmin()

    // List all tables in the cluster
    val listtables = admin.listTables()
    listtables.foreach(println)

    admin.close()
    connection.close()
  }
}
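For the eventual streaming use case, once the connection succeeds, inserting a row could look roughly like the sketch below. The row key, column family ("cf"), and qualifier are hypothetical placeholders; this assumes the table already exists in the VM's HBase with that column family:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object WriteExample {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "192.168.117.133")
    conf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(conf)
    try {
      // Assumes "Acadgild_spark_Hbase" already exists with a column family "cf"
      val table = connection.getTable(TableName.valueOf("Acadgild_spark_Hbase"))

      val put = new Put(Bytes.toBytes("row1"))  // hypothetical row key
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"),
                    Bytes.toBytes("value1"))    // hypothetical qualifier/value
      table.put(put)

      table.close()
    } finally {
      connection.close()
    }
  }
}
```

In a Spark Streaming job the same pattern would run inside `foreachRDD`/`foreachPartition`, creating the connection once per partition rather than per record.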

Error:

18/08/08 21:05:09 INFO ZooKeeper: Initiating client connection, connectString=192.168.117.133:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda/1357491107@12d1bfb1
18/08/08 21:05:15 INFO ClientCnxn: Opening socket connection to server 192.168.117.133/192.168.117.133:2181. Will not attempt to authenticate using SASL (unknown error)
18/08/08 21:05:15 INFO ClientCnxn: Socket connection established to 192.168.117.133/192.168.117.133:2181, initiating session
18/08/08 21:05:15 INFO ClientCnxn: Session establishment complete on server 192.168.117.133/192.168.117.133:2181, sessionid = 0x16518f57f950012, negotiated timeout = 40000
18/08/08 21:05:16 WARN ConnectionUtils: Can not resolve quickstart.cloudera, please check your network
java.net.UnknownHostException: quickstart.cloudera
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress.lookupAllHostAddr(Unknown Source)
    at java.net.InetAddress.getAddressesFromNameService(Unknown Source)
    at java.net.InetAddress.getAllByName0(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getByName(Unknown Source)
    at org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:233)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1126)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1148)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3055)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3047)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:460)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:444)
    at azure.iothub$.main(iothub.scala:35)
    at azure.iothub.main(iothub.scala)

Based on this error, you cannot use quickstart.cloudera in your code, because the network stack is trying to reach it via DNS, and your external router does not know about your VM.


You need to use localhost, and then make sure the VM is properly configured to forward the ports you need to connect to.

However, I believe ZooKeeper is returning that hostname to your code. So you will have to edit the hosts file on your host OS machine to add a line entry.

For example:

127.0.0.1 localhost quickstart.cloudera
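Alternatively, if you want to keep talking to the VM by its own address rather than relying on localhost port forwarding, the hosts entry on the Windows host (typically C:\Windows\System32\drivers\etc\hosts) can map the hostname directly to the VM's IP instead:

```
192.168.117.133 quickstart.cloudera
```

With this mapping, the hostname that ZooKeeper hands back to the client resolves to the VM, so the RegionServer/Master lookups succeed without changing the code.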

Alternatively, you can poke around in zookeeper-shell or Cloudera Manager (in the HBase configuration) and change quickstart.cloudera so that it returns the address 192.168.117.133.