How to save memory when using multiprocessing.map?

I have a function that essentially takes a pair of integers (x, y) and produces a vector of about 3000 elements. So I used:

import multiprocessing
import numpy as np

pool_obj = multiprocessing.Pool()
result = np.array(pool_obj.map(f, RANGE))

where RANGE is the Cartesian product of the two sets of values that x and y can each take.
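(For concreteness, RANGE is presumably built something like the snippet below; xs and ys are placeholder names, not from the original question:)

import itertools

xs = range(1000)
ys = range(1000)
RANGE = list(itertools.product(xs, ys))  # 1,000,000 (x, y) pairs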

My problem is that I only need np.sum(result, axis=0), which has length 3000; that is, I want to sum over all x and y. There are 1000x1000 (x, y) pairs in total, so this approach builds a huge 1000000x3000 array (at 8 bytes per float, roughly 24 GB) that exceeds the memory limit.

How can I solve this problem?

Here is an example that uses a generator for the (x, y) pairs to reduce the input size, together with imap to reduce the output size (results are streamed back to the main process one at a time instead of being collected into a list):

import multiprocessing as mp
import numpy as np

class yield_xy:
    """
    Generator for (x, y) pairs that avoids building every pair of x and y
    at the start of the map call. In this example that would only be a
    million floats, so on the order of 4-8 MB of data, but if x and y are
    bigger (or maybe you have a z) this can dramatically reduce the input
    data size.
    """
    def __init__(self, x, y):
        self._x = x
        self._y = y
        
    # map, starmap, map_async, etc. need the input size ahead of time;
    # providing __len__ keeps the pool from first copying the iterable into a list
    def __len__(self):
        return len(self._x) * len(self._y)
    
    # simple nested generator: needs storage for len(x) + len(y) values
    # rather than len(x) * len(y) pairs
    def __iter__(self):
        for x in self._x:
            for y in self._y:
                yield x, y

def task(args):
    # stand-in for the real f(x, y): returns a 3000-element vector
    x, y = args
    return (np.zeros(3000) + x) * y


def main():
    x = np.arange(0,1000)
    y = np.sin(x)
    
    out = np.zeros(3000)
    
    with mp.Pool() as pool:
        for result in pool.imap(task, yield_xy(x, y)):
            out += result  # accumulate results as they arrive, so only one is held at a time
    return out


if __name__ == "__main__":
    result = main()
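
If sending a million individual 3000-element vectors back to the main process is still too slow, a further (untested) variation is to have each worker sum a whole chunk of pairs locally and return a single partial sum. partial_sum below is a hypothetical helper for illustration, not part of the question's code:

import multiprocessing as mp
import numpy as np

def partial_sum(args):
    # hypothetical helper: sums task results over a chunk of x values,
    # so each worker sends back one 3000-vector instead of many
    x_chunk, y = args
    acc = np.zeros(3000)
    for x in x_chunk:
        for yv in y:
            acc += (np.zeros(3000) + x) * yv
    return acc

def main():
    x = np.arange(0, 1000)
    y = np.sin(x)

    out = np.zeros(3000)
    # a few chunks per worker keeps the pool busy without many tiny messages
    chunks = np.array_split(x, 4 * mp.cpu_count())
    with mp.Pool() as pool:
        for result in pool.imap_unordered(partial_sum, ((c, y) for c in chunks)):
            out += result
    return out

if __name__ == "__main__":
    result = main()

Since addition is commutative, imap_unordered is fine here and lets the main process consume results in whatever order workers finish.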