Optimize Hive Query. java.lang.OutOfMemoryError: Java heap space/GC overhead limit exceeded

Since I keep running into this OOM error, how can I optimize a query of this form, or come up with a better execution plan? If I remove the substring clause, the query runs fine, which suggests that it is what consumes so much memory.

When the job fails, the Beeline output shows OOM Java heap space. Reading online suggested increasing export HADOOP_HEAPSIZE, but that still resulted in the error. Another thing I tried was increasing hive.tez.container.size and hive.tez.java.opts (the Tez heap), but the error persisted. In the YARN logs the GC overhead limit is exceeded, which points to some combination of insufficient memory and/or a very inefficient query plan, since the JVM cannot reclaim enough memory.
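For reference, the tuning attempts above amount to session settings along these lines; the specific values here are illustrative assumptions, not the ones from the original post:

-- Illustrative values only, not the exact sizes that were tried:
-- export HADOOP_HEAPSIZE=4096          (client-side heap, set in the shell)
SET hive.tez.container.size=10240;      -- Tez container memory, in MB
SET hive.tez.java.opts=-Xmx8192m;       -- task JVM heap, typically ~80% of the container size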

I am running Azure HDInsight Interactive Query 4.0, with 20 worker nodes (D13v2: 8 cores and 56 GB RAM each).

Source table

create external table database.sourcetable(
  a,
  b,
  c,
  ...
  (183 total columns)
  ...
)
PARTITIONED BY ( 
  W string, 
  X int, 
  Y string, 
  Z int
)

Target table

create external table database.NEWTABLE(
  a,
  b,
  c,
  ...
  (187 total columns)
  ...
  W,
  X,
  Y,
  Z
)
PARTITIONED BY (
  aAAA,
  bBBB,
  cCCC
)

Query

insert overwrite table database.NEWTABLE partition(aAAA, bBBB, cCCC)
select
a,
b,
c,
...
(187 total columns)
...
W,
X,
Y,
Z,
cast(a as string) as aAAA, 
from_unixtime(unix_timestamp(b,'yyMMdd'),'yyyyMMdd') as bBBB,
substring(upper(c),1,2) as cCCC
from database.sourcetable

If everything else checks out, try adding distribute by on the partition keys at the end of the query:

  from database.sourcetable 
  distribute by aAAA, bBBB, cCCC

This way each reducer will create files for only a single partition, consuming less memory.
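Concretely, only the tail of the original insert changes; the select list stays exactly as shown above and is elided here:

insert overwrite table database.NEWTABLE partition(aAAA, bBBB, cCCC)
select
...  -- same select list as in the original query, ending with aAAA, bBBB, cCCC
from database.sourcetable
distribute by aAAA, bBBB, cCCC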

Also try sorting by the partition columns:

SET hive.optimize.sort.dynamic.partition=true;

When enabled, dynamic partitioning column will be globally sorted. This way we can keep only one record writer open for each partition value in the reducer thereby reducing the memory pressure on reducers.
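Putting both suggestions together, a minimal session sketch; the two hive.exec.dynamic.partition settings are an assumption about what an all-dynamic-partition insert typically requires, not something stated in the original post:

SET hive.exec.dynamic.partition=true;            -- assumed prerequisite for dynamic partitions
SET hive.exec.dynamic.partition.mode=nonstrict;  -- needed when all partition columns are dynamic
SET hive.optimize.sort.dynamic.partition=true;
-- ...then run the insert overwrite ... distribute by statement shown above.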

https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties