Cannot access Hive internal tables-AccessControlException

My user ID and my team cannot access any internal table in the Hive database. When we launch queries from HUE and the CLI, we get

an 'AccessControlException'. Please find the log below:

    INFO  : set mapreduce.job.reduces=<number>
    INFO  : Cleaning up the staging area maprfs:/var/mapr/cluster/yarn/rm/staging/keswara/.staging/job_1494760161412_0139
    ERROR : Job Submission failed with exception 'org.apache.hadoop.security.AccessControlException
      (User keswara(user id 1802830393) does not have access to
       maprfs:///user/hive/warehouse/bistore_sit.db/wt_consumer/d_partition_number=0/000114_0)'
    org.apache.hadoop.security.AccessControlException: User keswara(user id 1802830393) does not have access to maprfs:///user/hive/warehouse/bistore_sit.db/wt_consumer/d_partition_number=0/000114_0
        at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1320)
        at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:942)
        at org.apache.hadoop.fs.FileSystem.getFileBlockLocations(FileSystem.java:741)
        at org.apache.hadoop.fs.FileSystem.next(FileSystem.java:1762)
        at org.apache.hadoop.fs.FileSystem.next(FileSystem.java:1747)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:307)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
        at org.apache.hadoop.hive.shims.Hadoop23Shims.listStatus(Hadoop23Shims.java:148)
        at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:218)
        at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:310)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:472)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:573)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:331)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:323)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:199)
        at org.apache.hadoop.mapreduce.Job.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:421)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapred.JobClient.run(JobClient.java:562)
        at org.apache.hadoop.mapred.JobClient.run(JobClient.java:557)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:421)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
        at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:75)

No user can access the internal tables now, and I am part of the mapr group and a sudo user as well!

The table and partition ownership belong to the mapr group, and the permissions look fine:

    [mapr@SAN2LPMR03 mapr]$ hadoop fs -ls /user/hive/warehouse/bistore.db/wt_consumer
    Found 1 items
    drwxrwxrwt   - mapr mapr          1 2017-03-24 11:51 /user/hive/warehouse/bistore.db/wt_consumer/d_partition_number=__HIVE_DEFAULT_PARTITION__

Please help me fix this issue! Your help is greatly appreciated!

If the table is in Parquet format, the table's files will be writable only by the user who created the table.

To work around this, you can change the permissions on that file with a statement like:

    hdfs dfs -chmod 777 /user/hive/warehouse/bistore_sit.db/wt_consumer/d_partition_number=0/000114_0/*

This statement will grant all permissions on that particular file to all users.
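On a real cluster you would usually apply the fix recursively over the whole table directory (`hadoop fs -chmod -R 777 <table dir>`), so that every partition file is covered. As a minimal local sketch of the same mode-bit logic (plain POSIX chmod standing in for the `hdfs dfs`/`hadoop fs` variant; the directory layout below is made up to mirror the one in the question):

```shell
# Hypothetical local mirror of the warehouse layout from the question.
tmp=$(mktemp -d)
mkdir -p "$tmp/wt_consumer/d_partition_number=0"
touch "$tmp/wt_consumer/d_partition_number=0/000114_0"

# Parquet-style file: only the creating user can write.
chmod 640 "$tmp/wt_consumer/d_partition_number=0/000114_0"

# The proposed fix, applied recursively to the whole table directory
# (on the cluster: hadoop fs -chmod -R 777 /user/hive/warehouse/bistore_sit.db/wt_consumer).
chmod -R 777 "$tmp/wt_consumer"

stat -c '%a' "$tmp/wt_consumer/d_partition_number=0/000114_0"   # now 777
rm -rf "$tmp"
```

Note this only changes the plain mode bits; on MapR-FS, additional access control (e.g. ACEs) can still deny access even when the bits read `rwxrwxrwx`.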

I noticed the following while testing some tables in CSV and Parquet formats.

When you create a Hive table in CSV format, the table gets 777 permissions for all users who have access through your group.

But when a Hive table is created in Parquet format, only the user who created the table has write permission. I think it must be something to do with the Parquet format.
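One related configuration knob is Hive's `hive.warehouse.subdir.inherit.perms` property, which, when enabled, makes files and subdirectories created under the warehouse inherit the warehouse directory's permissions instead of being governed purely by the writing user's umask. Whether it resolves this particular Parquet/MapR case is an assumption on my part; a sketch of the hive-site.xml entry:

```xml
<!-- hive-site.xml: let table/partition files inherit the warehouse
     directory's permissions. Whether this resolves the Parquet case
     described above is an assumption, not something verified here. -->
<property>
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>true</value>
</property>
```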

    [root@psnode44 hive-2.1]# hadoop fs -ls /user/hive/warehouse/
    Found 1 items
    drwxrw-rw-   - mapr mapr          2 2017-06-28 12:49 /user/hive/warehouse/test

    0: jdbc:hive2://10.20.30.44:10000/> select * from test;
    Error: java.io.IOException: org.apache.hadoop.security.AccessControlException: User basa(user id 5005) does not have access to maprfs:/user/hive/warehouse/test (state=,code=0)

    [root@psnode44 hive-2.1]# hadoop fs -ls /user/hive/warehouse/
    Found 1 items
    drwxrwxrwx   - mapr mapr          2 2017-06-28 12:49 /user/hive/warehouse/test

Thinking it over, I ran chmod on the warehouse directory, but I still get the same error.

    [root@psnode44 hive-2.1]# hadoop fs -chmod -R 777 /user/hive/warehouse/
    [root@psnode44 hive-2.1]# hadoop fs -ls /user/hive/warehouse/
    Found 1 items
    drwxrwxrwx   - mapr mapr          2 2017-06-28 12:49 /user/hive/warehouse/test

    0: jdbc:hive2://10.20.30.44:10000/> select * from test;
    Error: java.io.IOException: org.apache.hadoop.security.AccessControlException: User basa(user id 5005) does not have access to maprfs:/user/hive/warehouse/test (state=,code=0)