How to resolve Spark java.lang.OutOfMemoryError: Java heap space while writing out in delta format?
I am loading about 4 GB of data from parquet files into a Spark DataFrame. The load takes only a few hundred milliseconds. I then register the DataFrame as a table so I can run SQL queries against it.
sparkDF = sqlContext.read.parquet("<path>/*.parquet")
sparkDF.registerTempTable("sparkDF")
One of the selective queries, with 60 columns in the select list, throws an out-of-memory exception:
spark.sql("select <60 columns list> from sessions where endtime >= '2019-07-01 00:00:00' and endtime < '2019-07-01 03:00:00' and id = '<uuid>'").show()
[Stage 12:> (0 + 36) / 211]2019-09-16 21:18:45,583 ERROR executor.Executor: Exception in task 25.0 in stage 12.0 (TID 1608)
java.lang.OutOfMemoryError: Java heap space
When I remove some of the columns from the select list, it executes successfully. I tried increasing spark.executor.memory and spark.driver.memory to about 16g, but that did not resolve the issue.
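For reference, here is a minimal sketch of how such settings are commonly passed when the session is created (the builder call, app name, and 16g values are illustrative, not necessarily my exact invocation; spark.driver.memory in particular generally only takes effect if it is set before the driver JVM starts, e.g. via a launch flag):

# Sketch (illustrative values): passing the memory settings at session creation.
# Note that spark.driver.memory usually has to be set before the driver JVM
# starts (e.g. pyspark/spark-submit --driver-memory); setting it here from an
# already-running driver may have no effect.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("sessions-query")              # hypothetical app name
         .config("spark.executor.memory", "16g")
         .config("spark.driver.memory", "16g")
         .getOrCreate())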
I then updated Spark to the latest version, 2.4.4, and the error no longer occurred.
However, when I write the same DataFrame out in delta format, with the same updated version, I get the same out-of-memory error:
sessions.write.format("delta").save("/usr/spark-2.4.4/data/data-delta/")
[Stage 5:> (0 + 36) / 37]2019-09-18 18:58:04,362 ERROR executor.Executor: Exception in task 21.0 in stage 5.0 (TID 109)
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.compress.DecompressorStream.<init>(DecompressorStream.java:64)
at org.apache.hadoop.io.compress.DecompressorStream.<init>(DecompressorStream.java:71)
at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.<init>(NonBlockedDecompressorStream.java:36)
at org.apache.parquet.hadoop.codec.SnappyCodec.createInputStream(SnappyCodec.java:75)
at org.apache.parquet.hadoop.CodecFactory$HeapBytesDecompressor.decompress(CodecFactory.java:109)
at org.apache.parquet.hadoop.ColumnChunkPageReadStore$ColumnChunkPageReader.visit(ColumnChunkPageReadStore.java:93)
at org.apache.parquet.hadoop.ColumnChunkPageReadStore$ColumnChunkPageReader.visit(ColumnChunkPageReadStore.java:88)
at org.apache.parquet.column.page.DataPageV1.accept(DataPageV1.java:95)
at org.apache.parquet.hadoop.ColumnChunkPageReadStore$ColumnChunkPageReader.readPage(ColumnChunkPageReadStore.java:88)
at org.apache.parquet.column.impl.ColumnReaderImpl.readPage(ColumnReaderImpl.java:532)
at org.apache.parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:525)
at org.apache.parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:638)
at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:353)
at org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:80)
at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:75)
at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:271)
at org.apache.parquet.io.MessageColumnIO.visit(MessageColumnIO.java:147)
at org.apache.parquet.io.MessageColumnIO.visit(MessageColumnIO.java:109)
at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:165)
at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:137)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon.hasNext(FileScanRDD.scala:101)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon.nextIterator(FileScanRDD.scala:181)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon.hasNext(FileScanRDD.scala:101)
at scala.collection.Iterator$$anon.hasNext(Iterator.scala:409)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:232)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
Any better suggestions or improvements to resolve the issue would be appreciated.
You can increase the amount of RAM the VM is allowed to use. The VM options are:
-Xms: sets the initial (minimum) heap size. Syntax: -Xms2048m (2 GB of memory)
-Xmx: sets the maximum heap size. Syntax: -Xmx2048m
I'm not sure whether this will solve your problem, but you should give it a try.
With Spark version 2.4.4, increasing just the driver memory at launch time resolved the issue:
pyspark --packages io.delta:delta-core_2.11:0.3.0 --driver-memory 5g
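Once the shell is up, you can sanity-check that the setting actually took effect. A small sketch (the property names are standard Spark configuration keys):

# Sketch: print the memory settings the running application actually picked up.
conf = spark.sparkContext.getConf()
for key in ("spark.driver.memory", "spark.executor.memory"):
    print(key, "=", conf.get(key, "<not set>"))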
Increasing driver and executor memory is only a temporary fix; this is really all about parallelism. Your driver should not need 16 GB of memory.
Instead of
spark.sql("select <60 columns list> from sessions where endtime >= '2019-07-01 00:00:00' and endtime < '2019-07-01 03:00:00' and id = ''").show()
you should use
spark.sql("select * from sessions where endtime >= '2019-07-01 00:00:00' and endtime < '2019-07-01 03:00:00' and id = ''").show(60)
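If it is the delta write itself that runs out of heap, one way to act on the parallelism point is to spread the write over more, smaller tasks. A sketch only; the partition count is illustrative and would need tuning against the actual data:

# Sketch: repartition before writing so each task decompresses and buffers a
# smaller slice of the data (200 is an illustrative partition count).
(sessions
    .repartition(200)
    .write.format("delta")
    .save("/usr/spark-2.4.4/data/data-delta/"))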