Spark gives a StackOverflowError when training using ALS
When trying to train a machine learning model using ALS in Spark's MLlib, I keep getting a StackOverflowError. Here's a small sample of the stack trace:
Traceback (most recent call last):
File "/Users/user/Spark/imf.py", line 31, in <module>
model = ALS.train(rdd, rank, numIterations)
File "/usr/local/Cellar/apache-spark/1.3.1_1/libexec/python/pyspark/mllib/recommendation.py", line 140, in train
lambda_, blocks, nonnegative, seed)
File "/usr/local/Cellar/apache-spark/1.3.1_1/libexec/python/pyspark/mllib/common.py", line 120, in callMLlibFunc
return callJavaFunc(sc, api, *args)
File "/usr/local/Cellar/apache-spark/1.3.1_1/libexec/python/pyspark/mllib/common.py", line 113, in callJavaFunc
return _java2py(sc, func(*args))
File "/usr/local/Cellar/apache-spark/1.3.1_1/libexec/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/local/Cellar/apache-spark/1.3.1_1/libexec/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o35.trainALSModel.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 40.0 failed 1 times, most recent failure: Lost task 0.0 in stage 40.0 (TID 35, localhost): java.lang.StackOverflowError
at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2296)
at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2589)
The error also shows up when running .mean() to compute the mean squared error. It occurs on Spark versions 1.3.1_1 and 1.4.1. I'm using PySpark, and increasing the available memory did not help.
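For reference, the .mean() call in question is the usual MSE evaluation step. This is a minimal sketch following the standard MLlib recommendation example, assuming model is the MatrixFactorizationModel returned by ALS.train and rdd holds (user, product, rating) tuples as in the training code above; the names testdata, predictions, and ratesAndPreds are illustrative:

testdata = rdd.map(lambda r: (r[0], r[1]))
predictions = model.predictAll(testdata).map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = rdd.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1]) ** 2).mean()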
The solution was to add checkpointing, which prevents the recursion used by the codebase from overflowing the stack. First, create a new directory to store the checkpoints. Then, have your SparkContext checkpoint using that directory. Here's the example in Python:
sc.setCheckpointDir('checkpoint/')
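Put together, a minimal sketch of the whole fix might look like the following; the SparkContext setup, the toy ratings, and the rank/numIterations values are illustrative assumptions, not from the original code:

import os
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS

sc = SparkContext(appName="ALSCheckpointDemo")  # app name is illustrative

# Create the checkpoint directory and register it before training,
# so intermediate RDDs can be checkpointed while ALS iterates.
if not os.path.exists('checkpoint'):
    os.mkdir('checkpoint')
sc.setCheckpointDir('checkpoint/')

# Toy (user, product, rating) tuples standing in for the real data.
rdd = sc.parallelize([(1, 1, 5.0), (1, 2, 1.0), (2, 1, 4.0), (2, 2, 1.0)])
rank, numIterations = 10, 20
model = ALS.train(rdd, rank, numIterations)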
You may also want to add checkpointing to ALS itself, but I couldn't determine whether it makes a difference. To add checkpointing there (it may not be necessary), just do:
ALS.checkpointInterval = 2
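As background for why checkpointing helps: each ALS iteration extends the lineage of the intermediate RDDs, and Java serialization walks that lineage recursively, which is consistent with the ObjectInputStream frames at the bottom of the trace above. Checkpointing writes the RDD to the checkpoint directory and truncates its lineage, keeping the recursion depth bounded.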