spark shuffle memory error: failed to allocate direct memory

I get the following error when performing several joins (4x) on Spark DataFrames:

org.apache.spark.shuffle.FetchFailedException: failed to allocate 16777216 byte(s) of direct memory (used: 4294967296, max: 4294967296)

even after setting:

--conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \

the error persists.
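
For reference, a spark-submit invocation carrying this option would look roughly like the sketch below; the jar name and the driver-side flag are illustrative assumptions, not part of the original command:

spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \
  --conf "spark.driver.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \
  my-app.jar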

It seems too many shuffle blocks are in flight at once. Try a smaller value for spark.reducer.maxBlocksInFlightPerAddress. For reference, see this JIRA (a minimal sketch of the change follows the quoted text below).

Quoted text:

For configurations with external shuffle enabled, we have observed that if a very large no. of blocks are being fetched from a remote host, it puts the NM under extra pressure and can crash it. This change introduces a configuration spark.reducer.maxBlocksInFlightPerAddress, to limit the no. of map outputs being fetched from a given remote address. The changes applied here are applicable for both the scenarios - when external shuffle is enabled as well as disabled.
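
As a minimal sketch of the suggested change (the value 50 is an arbitrary starting point to tune from, not a recommendation from the JIRA; by default this setting is effectively unlimited, and my-app.jar is a placeholder):

spark-submit \
  --conf spark.reducer.maxBlocksInFlightPerAddress=50 \
  --conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4G" \
  my-app.jar

The related spark.reducer.maxSizeInFlight setting (default 48m) caps the total size of map outputs fetched concurrently from each reduce task and can be lowered for the same reason, trading fetch throughput for lower direct-memory pressure.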