Apache Phoenix unable to connect to HBase
I'm new to Phoenix, so I may be missing something simple (facepalm-level).

HBase is up:
21:44:23 $ ps -ef | grep HMaster
501 55936 55922 0 9:50PM ttys014 0:18.12 /Library/Java/JavaVirtualMachines/jdk1.8.0_71.jdk/Contents/Home/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -Djava.net.preferIPv4Stack=true .. -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start
We can connect to it and query content via the hbase shell:

hbase(main):010:0> scan 't1'
ROW    COLUMN+CELL
 r1    column=f1:c1, timestamp=1469077174795, value=val1
1 row(s) in 0.0370 seconds
Now I have copied the phoenix 4.4.6 jars into the $HBASE_HOME/lib directory, restarted hbase, and tried to connect via sqlline.py:

$ sqlline.py mellyrn.local:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:mellyrn.local:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:mellyrn.local:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/shared/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/07/20 22:03:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1603)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1535)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1452)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:429)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService.callBlockingMethod(MasterProtos.java:52195)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.except
..
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: Class
org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set
hbase.table.sanity.checks to false at conf or table descriptor if you want to
bypass sanity checks
So any tips on what is required to bring phoenix up would be helpful.
Check $HBASE_HOME/lib and $HBASE_HOME/conf/hbase-site.xml on the HMaster.
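As a quick sanity check (assuming $HBASE_HOME is set on the master host), you can verify that a Phoenix server jar is actually on the master's classpath:

```shell
# On the HMaster host: is a Phoenix *server* jar on HBase's classpath?
ls "$HBASE_HOME/lib" | grep -i phoenix
# A phoenix-*-server.jar should be listed; the client jar alone is not
# enough for the master to load the Phoenix coprocessor classes.
```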
When you start phoenix, it creates 4 system tables:
SYSTEM.CATALOG
SYSTEM.FUNCTION
SYSTEM.SEQUENCE
SYSTEM.STATS
Tables SYSTEM.CATALOG and SYSTEM.FUNCTION declare the coprocessor org.apache.phoenix.coprocessor.MetaDataEndpointImpl, but your HMaster seems unable to load it.
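If those tables were ever created, you can see the coprocessor declaration from the HBase shell (output abridged and hypothetical; the exact attributes depend on your Phoenix/HBase versions):

```
hbase(main):001:0> describe 'SYSTEM.CATALOG'
...
coprocessor$1 => '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|...'
...
```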
The above exception is thrown when the HBase master cannot load the phoenix server jar. Even though the phoenix installation instructions say to just restart the region servers, that is not enough: copy the phoenix server jar to the HBase master and backup masters as well as to the region servers, then restart all of them.
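That deployment step can be sketched as follows. The hostnames (master1, backup1, rs1, rs2) and the jar path are assumptions; substitute the hosts and the server jar version from your own cluster:

```shell
# Hypothetical hostnames -- replace with your master, backup masters
# and region servers.
JAR=/shared/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-server.jar
for host in master1 backup1 rs1 rs2; do
  scp "$JAR" "$host:$HBASE_HOME/lib/"
done

# Then restart the whole cluster so every daemon (not just the region
# servers) picks up the jar:
"$HBASE_HOME/bin/stop-hbase.sh"
"$HBASE_HOME/bin/start-hbase.sh"
```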