Pyspark count() and collect() do not work

I am confused about my situation. I am finding sequential patterns in pyspark. To begin with, I had a key-value RDD like this:

p_split.take(2)

[(['A', 'B', 'C', 'D'], u'749'),
 (['O', 'K', 'A'], u'162')]

Then I found the combinations of the strings and joined them:

from itertools import combinations

def patterns1(text):
    # all combinations of every length 0 .. len(text)
    output = [list(combinations(text, i)) for i in range(len(text) + 1)]
    # keep only lengths 2 .. len(text) - 1
    output = output[2:-1]
    paths = []
    for item in output:
        for i in range(len(item)):
            paths.append('->'.join(item[i]))
    return paths
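
For example, on a three-element input the function returns only the length-2 joins, since the slice output[2:-1] drops the empty, single-element, and full-length combinations:

print(patterns1(['A', 'B', 'C']))
# ['A->B', 'A->C', 'B->C']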


p_patterns = p_split.map(lambda (x,y): (patterns1(x), y))

p_patterns.take(2)

 [(['A->B',
   'A->C',
   'A->D',
   'B->C',
   'B->D',
   ...
  u'749'), .....

With this RDD p_patterns I cannot perform operations like count() and collect(). With p_split I was able to do so successfully.

p_patterns.count()

    ---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-14-75eb19776fa7> in <module>()
----> 1 p_patterns.count()

/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py in count(self)
    930         3
    931         """
--> 932         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
    933 
    934     def stats(self):

/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py in sum(self)
    921         6.0
    922         """
--> 923         return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
    924 
    925     def count(self):

/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py in reduce(self, f)
    737             yield reduce(f, iterator, initial)
    738 
--> 739         vals = self.mapPartitions(func).collect()
    740         if vals:
    741             return reduce(f, vals)

/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py in collect(self)
    711         """
    712         with SCCallSiteSync(self.context) as css:
--> 713             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    714         return list(_load_from_socket(port, self._jrdd_deserializer))
    715 

/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538                 self.target_id, self.name)
    539 
    540         for temp_arg in temp_args:

/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    298                 raise Py4JJavaError(
    299                     'An error occurred while calling {0}{1}{2}.\n'.
--> 300                     format(target_id, '.', name), value)
    301             else:
    302                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8.0 (TID 8, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/worker.py", line 101, in main
    process()
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/worker.py", line 96, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py", line 2252, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py", line 2252, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py", line 2252, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py", line 282, in func
    return f(iterator)
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py", line 932, in <lambda>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py", line 932, in <genexpr>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "<ipython-input-12-0e1339e78f5c>", line 1, in <lambda>
  File "<ipython-input-11-b71a29b24fa7>", line 7, in patterns1
MemoryError

    at org.apache.spark.api.python.PythonRDD$$anon.read(PythonRDD.scala:135)
    at org.apache.spark.api.python.PythonRDD$$anon.<init>(PythonRDD.scala:176)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:94)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1193)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1192)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:693)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
    at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:48)

What is my mistake?

As far as I can tell, a MemoryError occurred in ipython. At the same time, your p_patterns.take(2) works, which means your RDD is fine: take(2) only computes as much of the RDD as is needed to return two elements, whereas count() and collect() have to evaluate every record.

So, could it be as simple as just caching your RDD before using it? Like:

p_patterns = p_split.map(lambda (x,y): (patterns1(x), y)).cache()
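
With cache(), the first action that touches p_patterns materializes its partitions and keeps them in memory; subsequent actions such as count() and collect() then reuse the cached data instead of re-running patterns1 over every record. That is standard Spark behavior, not anything specific to this code.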

As pointed out by @lanenok, it is a memory error, and considering what is happening inside the patterns1 function, it is not exactly surprising. The memory complexity of the following statement:

output = [list(combinations(text, i)) for i in range(len(text) + 1)]

is roughly O(2^N), where N is the length of the input text.
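
To make that concrete: the comprehension materializes every combination of every length, and the total count is the sum of C(N, i) for i = 0..N, which equals 2^N. A quick check (a minimal sketch, using math.comb from Python 3.8+):

from math import comb  # Python 3.8+

# The number of combinations of all lengths is sum_i C(n, i) == 2**n,
# so materializing all of them grows exponentially with the input length.
for n in (10, 20, 40):
    total = sum(comb(n, i) for i in range(n + 1))
    print(n, total)
# 10 1024
# 20 1048576
# 40 1099511627776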

There is a second problem hidden behind it. It doesn't make things worse than the exponential complexity, but it is bad in its own right: when you convert combinations to a list, you lose all the benefits of lazy sequences, which could be leveraged to push the limits set by the memory complexity a little further.
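
As an illustration (a minimal sketch, separate from the code above): itertools.combinations itself is a lazy iterator, and it is the surrounding list() call that forces every tuple into memory at once:

from itertools import combinations

text = ['A'] * 30

# Lazy: only one tuple needs to exist at any moment while iterating.
lazy_pairs = combinations(text, 2)
first = next(lazy_pairs)

# Eager: all C(30, 2) == 435 tuples are allocated up front; over all
# combination lengths this is what exhausts the worker's memory.
eager_pairs = list(combinations(text, 2))
print(len(eager_pairs))  # 435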

I would suggest using generators and lazy functions whenever possible (toolz rocks here). I have already mentioned this approach, so please take a look. For example, patterns1 can be rewritten as:

from itertools import combinations
from toolz.itertoolz import concat, map

def patterns1(text):
    # Lazily chain the combinations of every length 2 .. len(text) and
    # join each one; nothing is materialized until the result is consumed.
    return map(
        lambda x: '->'.join(x),
        concat(combinations(text, i) for i in range(2, len(text) + 1)))

Obviously it won't solve the memory-complexity problem, but it is a starting point for optimizing your algorithm.
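
For reference, a quick sanity check of the lazy version (a minimal sketch assuming Python 3, where the built-in map is already lazy, so the toolz imports are replaced with a generator expression):

from itertools import combinations

def patterns1(text):
    return map(
        lambda x: '->'.join(x),
        (c for i in range(2, len(text) + 1) for c in combinations(text, i)))

pats = patterns1(['A', 'B', 'C'])
print(next(pats))   # 'A->B' -- nothing else has been computed yet
print(list(pats))   # ['A->C', 'B->C', 'A->B->C']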