PySpark df.toPandas() throws error "org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)"
Using PySpark, I'm trying to convert a Spark DataFrame to a pandas DataFrame with:
# Enable Arrow-based columnar data transfers
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
data.toPandas()
I get the error "org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)", but I'm not sure why it occurs. It happens even on a subset of data with only 10 rows. Running without spark.conf.set("spark.sql.execution.arrow.enabled", "true") makes no difference to the error I get.
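To rule Arrow in or out, it may help to disable it explicitly under both config keys and retry; in Spark 3.x the key moved to spark.sql.execution.arrow.pyspark.enabled, and the old key is deprecated, so setting only the legacy key may have no effect. A minimal diagnostic sketch, assuming a live spark session and the data DataFrame from above:

# Diagnostic sketch: force Arrow off under both the legacy and the
# Spark 3.x config keys, then retry the conversion. If the same
# memory-leak error still appears, the failure is not Arrow-specific.
spark.conf.set("spark.sql.execution.arrow.enabled", "false")          # legacy key (pre-3.0)
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "false")  # Spark 3.x key
pdf = data.toPandas()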
Full error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
In [20]:
Line 8: data.toPandas()
File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\sql\pandas\conversion.py, in toPandas:
Line 108: batches = self.toDF(*tmp_column_names)._collect_as_arrow()
File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\sql\pandas\conversion.py, in _collect_as_arrow:
Line 244: jsocket_auth_server.getResult()
File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py, in __call__:
Line 1305: answer, self.gateway_client, self.target_id, self.name)
File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\sql\utils.py, in deco:
Line 128: return f(*a, **kw)
File c:\program files\arcgis\pro\Java\runtime\spark\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py, in get_return_value:
Line 328: format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o204.getResult.
: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:302)
at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:88)
at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:84)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 26.0 failed 1 times, most recent failure: Lost task 0.0 in stage 26.0 (TID 26, <user>, executor driver): org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)
Allocator(toBatchIterator) 0/376832/376832/9223372036854775807 (res/actual/peak/limit)
Previous exception in task: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:490)
io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:226)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.$anonfun$next(ArrowConverters.scala:118)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.next(ArrowConverters.scala:121)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.next(ArrowConverters.scala:97)
scala.collection.Iterator.foreach(Iterator.scala:941)
scala.collection.Iterator.foreach$(Iterator.scala:941)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.foreach(ArrowConverters.scala:97)
scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.to(ArrowConverters.scala:97)
scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.toBuffer(ArrowConverters.scala:97)
scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.toArray(ArrowConverters.scala:97)
org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython(Dataset.scala:3562)
org.apache.spark.SparkContext.$anonfun$runJob(SparkContext.scala:2193)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
org.apache.spark.scheduler.Task.run(Task.scala:127)
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run(Executor.scala:446)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.base/java.lang.Thread.run(Unknown Source)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage(DAGScheduler.scala:2008)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$adapted(DAGScheduler.scala:2007)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$adapted(DAGScheduler.scala:973)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2194)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython(Dataset.scala:3560)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython(Dataset.scala:3564)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$adapted(Dataset.scala:3541)
at org.apache.spark.sql.Dataset.$anonfun$withAction(Dataset.scala:3618)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId(SQLExecution.scala:100)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId(SQLExecution.scala:87)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython(Dataset.scala:3541)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$adapted(Dataset.scala:3540)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream(SocketAuthServer.scala:130)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream(SocketAuthServer.scala:132)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$adapted(SocketAuthServer.scala:127)
at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:104)
at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:98)
at org.apache.spark.security.SocketAuthServer$$anon.$anonfun$run(SocketAuthServer.scala:60)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.security.SocketAuthServer$$anon.run(SocketAuthServer.scala:60)
Caused by: org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (376832)
Allocator(toBatchIterator) 0/376832/376832/9223372036854775807 (res/actual/peak/limit)
Previous exception in task: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:490)
io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:226)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.$anonfun$next(ArrowConverters.scala:118)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.next(ArrowConverters.scala:121)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.next(ArrowConverters.scala:97)
scala.collection.Iterator.foreach(Iterator.scala:941)
scala.collection.Iterator.foreach$(Iterator.scala:941)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.foreach(ArrowConverters.scala:97)
scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.to(ArrowConverters.scala:97)
scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.toBuffer(ArrowConverters.scala:97)
scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
org.apache.spark.sql.execution.arrow.ArrowConverters$$anon.toArray(ArrowConverters.scala:97)
org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython(Dataset.scala:3562)
org.apache.spark.SparkContext.$anonfun$runJob(SparkContext.scala:2193)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
org.apache.spark.scheduler.Task.run(Task.scala:127)
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run(Executor.scala:446)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.base/java.lang.Thread.run(Unknown Source)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
---------------------------------------------------------------------------
Any tips or suggestions on how to resolve this?
Thanks
It turned out that the old Spark version I was using was the problem. Upgrading Spark fixed it for me. You can try a different version via the SPARK_HOME environment variable:
# 1. get spark-3.1.1-bin-hadoop2.7.tgz from https://archive.apache.org/dist/spark/spark-3.1.1/
#    (You can use a different version; this one worked for me. A newer release may suit you better; a build with the log4j fix might be available now.)
# 2. open git bash, then:
# >> cd <spark-3.1.1-bin-hadoop2.7.tgz location>
# >> tar xzvf spark-3.1.1-bin-hadoop2.7.tgz
# 3. set system environment variable (used by spark_esri):
# SPARK_HOME: <path/to/spark-3.1.1-bin-hadoop2.7>
os.environ["SPARK_HOME"] = r"C:\spark-3.1.1-bin-hadoop2.7"
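After pointing SPARK_HOME at the extracted build, a quick sanity check confirms which version is actually picked up. This is a sketch; the path is an example, so adjust it to wherever you extracted the archive, and run it in a fresh Python process so pyspark re-reads the variable:

```python
import os

# Example path; adjust to wherever the archive was extracted.
os.environ["SPARK_HOME"] = r"C:\spark-3.1.1-bin-hadoop2.7"

# In a fresh interpreter, a new session should now report the expected
# version string (uncomment once pyspark is importable):
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.getOrCreate()
# print(spark.version)  # should print 3.1.1 for this build
```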