OutOfMemoryError on Tomcat 7
I am working on a web application that takes a zip file uploaded by the user, decompresses it on the server, and then processes the files. It works like a charm when the zip is not too large (20-25 MB), but files at or above about 50 MB produce an OutOfMemoryError.
I tried to increase the Java maximum memory allocation pool by adding
export CATALINA_OPTS="-Xmx1024M"
to startup.sh in tomcat7, but the error persists.
As far as I can tell, the problem is in decompressing the .zip file: top shows Tomcat using 800 MB of memory while extracting a 50 MB file. Is there a way to allow uploads of up to ~200 MB while using the available memory efficiently?
The unzip code is as follows:
package user;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class unzip {
    public void unzipFile(String filePath, String oPath)
    {
        FileInputStream fis = null;
        ZipInputStream zipIs = null;
        ZipEntry zEntry = null;
        try {
            fis = new FileInputStream(filePath);
            zipIs = new ZipInputStream(new BufferedInputStream(fis));
            while ((zEntry = zipIs.getNextEntry()) != null) {
                try {
                    byte[] tmp = new byte[8*1024];
                    FileOutputStream fos = null;
                    String opFilePath = oPath + zEntry.getName();
                    System.out.println("Extracting file to " + opFilePath);
                    fos = new FileOutputStream(opFilePath);
                    int size = 0;
                    while ((size = zipIs.read(tmp)) != -1) {
                        fos.write(tmp, 0, size);
                    }
                    fos.flush();
                    fos.close();
                } catch (Exception ex) {
                }
            }
            zipIs.close();
            fis.close();
        } catch (FileNotFoundException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
The error is as follows:
HTTP Status 500 - javax.servlet.ServletException: java.lang.OutOfMemoryError: Java heap space
type Exception report
message javax.servlet.ServletException: java.lang.OutOfMemoryError: Java heap space
description The server encountered an internal error that prevented it from fulfilling this request.
exception
org.apache.jasper.JasperException: javax.servlet.ServletException: java.lang.OutOfMemoryError: Java heap space
org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:549)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:455)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
root cause
javax.servlet.ServletException: java.lang.OutOfMemoryError: Java heap space
org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:916)
org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:845)
org.apache.jsp.Upload_jsp._jspService(Upload_jsp.java:369)
org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
root cause
java.lang.OutOfMemoryError: Java heap space
org.apache.commons.io.output.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:322)
org.apache.commons.io.output.DeferredFileOutputStream.getData(DeferredFileOutputStream.java:213)
org.apache.commons.fileupload.disk.DiskFileItem.getSize(DiskFileItem.java:289)
org.apache.jsp.Upload_jsp._jspService(Upload_jsp.java:159)
org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
note The full stack trace of the root cause is available in the Apache Tomcat/7.0.52 (Ubuntu) logs.
Apache Tomcat/7.0.52 (Ubuntu)
Surprisingly, there is nothing about this exception in the catalina.out file.
Thanks in advance.
Edit: the code in Upload.jsp that uses DiskFileItem:
//necessary imports go here
File file;
int maxFileSize = 1000 * 1000 * 1024;
int maxMemSize = 1000 * 1024;
ServletContext context = pageContext.getServletContext();
String filePath = context.getInitParameter("file-upload");
String contentType = request.getContentType();
if (contentType != null)
{
    if ((contentType.indexOf("multipart/form-data") >= 0))
    {
        DiskFileItemFactory factory = new DiskFileItemFactory();
        factory.setSizeThreshold(maxMemSize);
        factory.setRepository(new File("/tmp/"));
        ServletFileUpload upload = new ServletFileUpload(factory);
        upload.setSizeMax(maxFileSize);
        try {
            List fileItems = upload.parseRequest(request);
            Iterator i = fileItems.iterator();
            while (i.hasNext())
            {
                FileItem fi = (FileItem) i.next();
                if (!fi.isFormField())
                {
                    String fieldName = fi.getFieldName();
                    String fileName = fi.getName();
                    if (fileName.endsWith(".zip") || fileName.endsWith(".pdf") || fileName.endsWith(".doc") || fileName.endsWith(".docx") || fileName.endsWith(".ppt") || fileName.endsWith(".pptx") || fileName.endsWith(".html") || fileName.endsWith(".htm") || fileName.endsWith(".epub") || fileName.endsWith(".djvu"))
                    {
                        boolean isInMemory = fi.isInMemory();
                        long sizeInBytes = fi.getSize();
                        new File(filePath + fileName).mkdir();
                        filePath = filePath + fileName + "/";
                        file = new File(filePath + fileName.substring(fileName.lastIndexOf("/")));
                        fi.write(file);
                        String fileExtension = FilenameUtils.getExtension(fileName);
                        if (fileExtension.equals("zip"))
                        {
                            System.out.println("In zip.");
                            unzip mfe = new unzip();
                            mfe.unzipFile(filePath + fileName, filePath);
                            File zip = new File(filePath + fileName);
                            zip.delete();
                        }
                        File corePath = new File(filePath);
                        int count = 0;
                        //some more processing
                    }
                }
            }
        }
        catch (Exception e)
        {
            //exception handling goes here
        }
    }
}
Allocating a fresh 8 KB buffer (new byte[8*1024]) for every zip entry is a tentative approach at best; garbage collection does not run continuously, so those buffers pile up. Try allocating the buffer once and using a smaller size, say no more than 1 KB.
Try an approach like this:
int BUFFER_SIZE = 1024;
int size;
byte[] buffer = new byte[BUFFER_SIZE];
...
FileOutputStream out = new FileOutputStream(path, false);
BufferedOutputStream fout = new BufferedOutputStream(out, BUFFER_SIZE);
while ((size = zin.read(buffer, 0, BUFFER_SIZE)) != -1) {
    fout.write(buffer, 0, size);
}
fout.close();
Your while loop seems to be allocating too much memory; check how many times it executes to decide.
The following line is the main cause:
byte[] tmp = new byte[8*1024];
You could try reducing the 1024 to something like 10 and see whether it still happens.
Also check the size of the files.
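As a sketch of that suggestion, here is the extraction loop with the buffer allocated once, before the entry loop, and reused for every entry (the class name is illustrative, streams are closed via try-with-resources, and no zip-slip path validation is done here):

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class BufferedUnzip {
    // Extracts a zip archive entry by entry through one fixed-size buffer,
    // so memory use stays constant regardless of archive size.
    static void unzip(Path zipFile, Path outDir) throws IOException {
        byte[] buffer = new byte[8 * 1024];  // allocated once, not per entry
        try (ZipInputStream zin = new ZipInputStream(
                new BufferedInputStream(Files.newInputStream(zipFile)))) {
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                Path target = outDir.resolve(entry.getName());
                if (entry.isDirectory()) {
                    Files.createDirectories(target);
                    continue;
                }
                Files.createDirectories(target.getParent());
                try (OutputStream out = new BufferedOutputStream(Files.newOutputStream(target))) {
                    int n;
                    while ((n = zin.read(buffer)) != -1) {
                        out.write(buffer, 0, n);
                    }
                }
            }
        }
    }
}
```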
The problem is not in the unzip code you posted. The root cause is here:
java.lang.OutOfMemoryError: Java heap space
org.apache.commons.io.output.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:322)
org.apache.commons.io.output.DeferredFileOutputStream.getData(DeferredFileOutputStream.java:213)
org.apache.commons.fileupload.disk.DiskFileItem.getSize(DiskFileItem.java:289)
Did you notice ByteArrayOutputStream.toByteArray? It looks like you are writing to a ByteArrayOutputStream that grows too large. Please find and post the code that uses this ByteArrayOutputStream, because the code you posted uses nothing of the sort.
Update:
Judging from the code you posted, the code itself seems fine, but the FileItem.getSize() call does something nasty:
283 public long getSize() {
284 if (size >= 0) {
285 return size;
286 } else if (cachedContent != null) {
287 return cachedContent.length;
288 } else if (dfos.isInMemory()) {
289 return dfos.getData().length;
290 } else {
291 return dfos.getFile().length();
292 }
293 }
If the file item's data is kept in memory, it calls getData(), which calls toByteArray():
209 public byte[] getData()
210 {
211 if (memoryOutputStream != null)
212 {
213 return memoryOutputStream.toByteArray();
214 }
215 return null;
216 }
which in turn allocates a new array:
317 public synchronized byte[] toByteArray() {
318 int remaining = count;
319 if (remaining == 0) {
320 return EMPTY_BYTE_ARRAY;
321 }
322 byte newbuf[] = new byte[remaining];
//Do stuff
333 return newbuf;
334 }
So for a short period your memory consumption is twice what it needs to be. I recommend that you:
Set maxMemSize to no more than 8-32 KB.
Give the JVM process more memory: -Xmx2g, for example.
Make sure you are not holding any unnecessary references to FileItems, since with your current configuration they consume a lot of memory.
If the OOM happens again, take a heap dump. The -XX:+HeapDumpOnOutOfMemoryError JVM flag creates one for you automatically; you can then use a heap dump analyzer (for example Eclipse MAT) to find out who allocated all that memory, and where.
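The heap and heap-dump flags above can be combined in one place. A sketch of how this might look for Tomcat 7 (the dump path is illustrative; catalina.sh reads bin/setenv.sh automatically if it exists, which is preferable to editing startup.sh directly):

```shell
# $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh on startup
export CATALINA_OPTS="-Xmx2g \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/tomcat7"
```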
The problem is that when the user uploads a zip file, the whole file is read into memory; the stack trace shows the error is thrown when DiskFileItem.getSize() is called.
Looking at the source of DiskFileItem, getSize() first fetches all of the data:
283 public long getSize() {
284 if (size >= 0) {
285 return size;
286 } else if (cachedContent != null) {
287 return cachedContent.length;
288 } else if (dfos.isInMemory()) {
289 return dfos.getData().length;
290 } else {
291 return dfos.getFile().length();
292 }
293 }
Looking at the documentation of DeferredFileOutputStream.getData():
Returns either the output file specified in the constructor or the temporary file created or null.
If the constructor specifying the file is used then it returns that same output file, even when the threshold has not been reached.
If the constructor specifying a temporary file prefix/suffix is used then the temporary file created once the threshold is reached is returned. If the threshold was not reached then null is returned.
Returns:
The file for this output stream, or null if no such file exists.
Ideally, users should not be allowed to upload files of arbitrary size; there should be a maximum size limit based on your server's capacity.
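Note that the Upload.jsp code already calls upload.setSizeMax(maxFileSize), so enforcing a limit is mostly a matter of lowering maxFileSize from its current ~1 GB to something the server can handle. The underlying principle, counting bytes while streaming and aborting once a cap is exceeded so the whole file is never held in memory, can be sketched with the standard library (class and method names here are illustrative, not part of commons-fileupload):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SizeCappedCopy {
    // Copies in to out, aborting as soon as more than maxBytes have been read.
    // Because the data moves through a small fixed buffer, memory use stays
    // constant no matter how large the upload is.
    static long copyWithCap(InputStream in, OutputStream out, long maxBytes) throws IOException {
        byte[] buf = new byte[8 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
            if (total > maxBytes) {
                throw new IOException("upload exceeds " + maxBytes + " bytes");
            }
            out.write(buf, 0, n);
        }
        return total;
    }
}
```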