Hadoop error log: JVM / Sqoop

My error: after the Java program has been running for 6-8 hours, I get this log (hs_err_pid6662.log):

And also this:

  [testuser@apus ~]$ sh /home/progr/work/import.sh
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: Resource temporarily unavailable

The program runs every five minutes and tries to import/export from Oracle.

How can I fix this problem?

# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (gcTaskThread.cpp:48), pid=6662, tid=0x00007f429a675700
#
---------------  T H R E A D  ---------------

Current thread (0x00007f4294019000):  JavaThread "Unknown thread" [_thread_in_vm, id=6696, stack(0x00007f429a575000,0x00007f429a676000)]

Stack: [0x00007f429a575000,0x00007f429a676000],  sp=0x00007f429a674550,  free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)


VM Arguments:
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -


Launcher Type: SUN_STANDARD

Environment Variables:
JAVA_HOME=/usr/java/jdk1.8.0_102


# JRE version:  (8.0_102-b14) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

Memory: 4k page, physical 24591972k(6051016k free), swap 12369916k(11359436k free)

Every 5 minutes I run Java programs such as sqoop-import, sqoop-export, etc. Example:

#!/bin/bash

hadoop jar /home/progr/import_sqoop/oracle.jar
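
For reference, an "every five minutes" schedule like this is usually driven by cron; a minimal crontab sketch, assuming the import.sh wrapper shown at the top is the entry point (the log path here is made up for illustration):

    # hypothetical crontab entry: run the import wrapper every 5 minutes
    */5 * * * * /bin/sh /home/progr/work/import.sh >> /tmp/import_cron.log 2>&1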

CDH version: 5.11.1

Java version: jdk1.8.0_102

OS: Red Hat Enterprise Linux Server release 6.9 (Santiago)

Free memory:

             total       used       free     shared    buffers     cached
 Mem:      24591972   20080336    4511636     132036     334456    2825792
 -/+ buffers/cache:   16920088    7671884
Swap:     12369916    1008664   11361252

Host memory usage: (screenshot omitted)

The maximum heap memory is limited (by default) to 1 GB. You need to increase it:

JRE version: (8.0_102-b14) (build )
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -

Try the following to increase it to 2048 MB (or higher, if needed):

export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
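
For example, the export can go at the top of the wrapper script so that every hadoop/sqoop command launched from it inherits the larger client heap; a minimal sketch, assuming import.sh simply wraps the hadoop jar call shown above:

    #!/bin/bash
    # Raise the client-side JVM heap for hadoop/sqoop commands started by this script.
    # 2048m is a starting point; tune it to what the host can actually spare.
    export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"

    hadoop jar /home/progr/import_sqoop/oracle.jar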

Reference: Pig: Hadoop jobs Fail
https://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201104.mbox/%3C5FFFF0E4-B3BA-420A-ADE3-B422A66E8B11@yahoo-inc.com%3E