Hive query fails with the Tez engine but runs fine in MR mode

I changed my Hive execution engine to Tez and want to run queries with it, but queries only succeed for the hadoop and hive users; when I switch to my own user (user51) in Beeline or Hue, the query fails. The same query runs fine for user51 when the execution engine is MR.

Below are the scenarios, along with the debug log of the failure.

Works for all users

SET hive.execution.engine=mr;
SELECT count(*) FROM db.mytable;

Works only for the hadoop and hive users

SET hive.execution.engine=tez;
SELECT count(*) FROM db.mytable;

Fails for other users, such as user51

SET hive.execution.engine=tez;
SELECT count(*) FROM db.mytable;

Error log

INFO  [HiveServer2-Background-Pool: Thread-643([])]: client.TezClientUtils (TezClientUtils.java:setupTezJarsLocalResources(178)) - Using tez.lib.uris value from configuration: hdfs:///apps/tez/tez.tar.gz
INFO  [HiveServer2-Background-Pool: Thread-643([])]: client.TezClientUtils (TezClientUtils.java:setupTezJarsLocalResources(180)) - Using tez.lib.uris.classpath value from configuration: null
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #952 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #952
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #953 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #953
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #954 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #954
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 0ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #955 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #955
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #956 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #956
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #957 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #957
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 0ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #958 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #958
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(248)) - Call: getFileInfo took 1ms
DEBUG [IPC Parameter Sending Thread #13([])]: ipc.Client (Client.java:run(1117)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 sending #959 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
DEBUG [IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51([])]: ipc.Client (Client.java:receiveRpcResponse(1171)) - IPC Client (1005331061) connection to ip-10-4-22-xx.ec2.internal/10.4.xx.xx:8020 from user51 got value #959
DEBUG [HiveServer2-Background-Pool: Thread-643([])]: retry.RetryInvocationHandler (RetryInvocationHandler.java:handleException(366)) - Exception while invoking call #959 ClientNamenodeProtocolTranslatorPB.getFileInfo over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException: java.lang.NullPointerException
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1435) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1345) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_222]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_222]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) [hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) [?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1717) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:1437) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:1434) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1449) [hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:1036) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.addLocalResources(TezClientUtils.java:275) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:183) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:1057) [tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.start(TezClient.java:447) [tez-api-0.8.4.jar:0.8.4]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:376) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:323) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:703) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:303) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:168) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation.access0(SQLOperation.java:91) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:348) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_222]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_222]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
ERROR [HiveServer2-Background-Pool: Thread-643([])]: exec.Task (TezTask.java:execute(230)) - Failed to execute tez graph.
org.apache.hadoop.ipc.RemoteException: java.lang.NullPointerException
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1435) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1345) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_222]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_222]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1717) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:1437) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:1434) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1449) ~[hadoop-hdfs-client-2.8.3-amzn-1.jar:?]
at org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:1036) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.addLocalResources(TezClientUtils.java:275) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:183) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:1057) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.tez.client.TezClient.start(TezClient.java:447) ~[tez-api-0.8.4.jar:0.8.4]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:376) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:323) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:703) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:303) ~[hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:168) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232) [hive-exec-2.3.2-amzn-0.jar:2.3.2-amzn-0]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation.access0(SQLOperation.java:91) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:348) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_222]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_222]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) [hadoop-common-2.8.3-amzn-1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) [hive-service-2.3.2-amzn-2.jar:2.3.2-amzn-2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
ERROR [HiveServer2-Background-Pool: Thread-643([])]: ql.Driver (SessionState.java:printError(1126)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask

I don't know what is going on here. Can anyone help?

I finally found the solution. We had added an HDFS authorization property to hdfs-site.xml, and when a query runs on the Tez engine, Tez creates some temporary files and directories in HDFS. So I removed the following extra property from hdfs-site.xml and restarted the Hadoop services.

Extra property

<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>

Hope this helps someone.