Map Reduce Completed but Pig Job Failed

I recently ran into a situation where the MapReduce job appears to succeed in the Resource Manager (RM), but the Pig script returns exit code 8, which refers to "Throwable thrown (an unexpected exception)".

Adding the script as requested:

REGISTER '$LIB_LOCATION/*.jar'; 

-- set number of reducers to 200
SET default_parallel $REDUCERS;
SET mapreduce.map.memory.mb 3072;
SET mapreduce.reduce.memory.mb 6144;

SET mapreduce.map.java.opts -Xmx2560m;
SET mapreduce.reduce.java.opts -Xmx5120m;
SET mapreduce.job.queuename dt_pat_merchant;

SET yarn.app.mapreduce.am.command-opts -Xmx5120m;
SET yarn.app.mapreduce.am.resource.mb 6144;

-- load data from the EAP data catalog for the given environment ($ENV = PROD)
data = LOAD 'eap-$ENV://event'
-- using a custom function
USING com.XXXXXX.pig.DataDumpLoadFunc
('{"startDate": "$START_DATE", "endDate" : "$END_DATE", "timeType" : "$TIME_TYPE", "fileStreamType":"$FILESTREAM_TYPE", "attributes": { "all": "true" } }', '$MAPPING_XML_FILE_PATH');

-- filter out null context entity records
filtered = FILTER data BY (attributes#'context_id' IS NOT NULL);

-- group data by session id
session_groups = GROUP filtered BY attributes#'context_id';

-- flatten events
flattened_events = FOREACH session_groups GENERATE FLATTEN(filtered);

-- remove the output directory if exists
RMF $OUTPUT_PATH;

-- store results in specified output location
STORE flattened_events INTO '$OUTPUT_PATH' USING com.XXXX.data.catalog.pig.EventStoreFunc();

And I can see "ERROR 2998: Unhandled internal error. GC overhead limit exceeded" in the Pig logs (log below).

Pig Stack Trace
---------------
ERROR 2998: Unhandled internal error. GC overhead limit exceeded

java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.apache.hadoop.mapreduce.FileSystemCounter.values(FileSystemCounter.java:23)
        at org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.findCounter(FileSystemCounterGroup.java:219)
        at org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.findCounter(FileSystemCounterGroup.java:199)
        at org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.findCounter(FileSystemCounterGroup.java:210)
        at org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
        at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:241)
        at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:370)
        at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:391)
        at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskReports(ClientServiceDelegate.java:451)
        at org.apache.hadoop.mapred.YARNRunner.getTaskReports(YARNRunner.java:594)
        at org.apache.hadoop.mapreduce.Job.run(Job.java:545)
        at org.apache.hadoop.mapreduce.Job.run(Job.java:543)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.mapreduce.Job.getTaskReports(Job.java:543)
        at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getTaskReports(HadoopShims.java:235)
        at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.addMapReduceStatistics(MRJobStats.java:352)
        at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.addSuccessJobStats(MRPigStatsUtil.java:233)
        at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.accumulateStats(MRPigStatsUtil.java:165)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:360)
        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:282)
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1431)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1416)
        at org.apache.pig.PigServer.execute(PigServer.java:1405)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:456)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:439)
        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
        at org.apache.pig.Main.run(Main.java:624)

The configuration in the Pig script looks like this:

SET default_parallel 200;
SET mapreduce.map.memory.mb 3072;
SET mapreduce.reduce.memory.mb 6144;

SET mapreduce.map.java.opts -Xmx2560m;
SET mapreduce.reduce.java.opts -Xmx5120m;
SET mapreduce.job.queuename dt_pat_merchant;

SET yarn.app.mapreduce.am.command-opts -Xmx5120m;
SET yarn.app.mapreduce.am.resource.mb 6144;

The job status in the cluster's RM shows the job as succeeded [can't post an image because my reputation is too low ;)].

This problem happens frequently, and we have to restart the job for it to succeed.

Please suggest a fix for this.

PS: The cluster the job runs on is one of the biggest clusters in the world, so resources or storage space are not a concern, I'd say.

Thanks

From the Oracle docs:

After a garbage collection, if the Java process is spending more than approximately 98% of its time doing garbage collection and if it is recovering less than 2% of the heap and has been doing so for the last 5 (compile-time constant) consecutive garbage collections, then a java.lang.OutOfMemoryError is thrown. The "GC overhead limit exceeded" java.lang.OutOfMemoryError exception can be turned off with the command-line flag -XX:-UseGCOverheadLimit.

As described in the docs, you can either turn this exception off or increase the heap size.
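As a sketch (assuming the standard Apache Pig launch script, which passes the `PIG_OPTS` environment variable to the client JVM), the GC overhead check could be disabled like this. Note this only hides the symptom — the client JVM is still nearly out of heap and will spend most of its time in GC:

```shell
# Disabling the GC-overhead check is usually a last resort: the JVM keeps
# running instead of failing fast, but may make little actual progress.
export PIG_OPTS="$PIG_OPTS -XX:-UseGCOverheadLimit"

# myscript.pig is a placeholder for your actual script
pig -f myscript.pig
```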

Could you add your Pig script here?

I think you are getting this error because Pig itself (not the mappers and reducers) cannot handle the output. If you use a DUMP operation in your script, first try limiting the dataset being displayed. Suppose your data is in an alias X. Try:

temp = LIMIT X 1;
DUMP temp;

This way you will see only one record and save some resources. You can also use a STORE operation instead (see the Pig manual for how to do this).

Obviously, you can configure a larger heap size for Pig, but note that Pig's heap is separate from the mapreduce.map.* and mapreduce.reduce.* memory settings. Use the PIG_HEAPSIZE environment variable to set it.
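For example (a sketch — 4096 MB is an illustrative value, not a figure tuned to this job), the client heap can be raised before launching the script:

```shell
# PIG_HEAPSIZE sets the heap (-Xmx, in MB) of the local Pig client JVM.
# It does not affect mapreduce.map.memory.mb / mapreduce.reduce.memory.mb,
# which size the map and reduce containers on the cluster.
export PIG_HEAPSIZE=4096

# myscript.pig is a placeholder for your actual script
pig -f myscript.pig
```

This matters here because the stack trace shows the OOM happening in the Pig client while it aggregates job statistics (MRJobStats/MRPigStatsUtil) after the MapReduce job has already succeeded — so it is the client JVM, not the containers, that needs more memory.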