How to call a function n times with multiprocessing
I want to call my function n times with multiprocessing (to save time) and store the results in a numpy array:

num = N  # number of trials
results = np.zeros([N, 2])  # array of results (2 because the function produces 2 results)

def f():  # no arguments, because the function is based on randomness
    ...
    return a, b  # results are float64
I would like something like this:

for i in range(num):
    results[i] = f()

but using multiprocessing. Is there a way to do this? I tried the following, but it did not work:
from multiprocessing import Pool

if __name__ == '__main__':
    with Pool(15) as p:
        for i in range(num):
            result[i] = p.map(f, iterable=i)
You can do this by calling the apply_async() method of the Pool class and storing the returned AsyncResult objects in a list. You also need to remember to invoke close() and join(). Once all processes have finished, you can collect the results from the AsyncResult objects. In the example below, f() runs 100 times in total, but at most 4 processes run concurrently (not counting the process that spawns the others). I believe the code could be optimized further, but it should be a good starting point.
import multiprocessing as mp
import numpy as np

def f():
    # you perform your calculations here
    result = 0, 0  # this is only for testing
    return result

if __name__ == '__main__':
    count = 100
    async_results = []
    with mp.Pool(processes=4) as pool:
        for _ in range(count):
            async_results.append(pool.apply_async(f))
        pool.close()
        pool.join()
    results = np.zeros([count, 2])
    for i, async_result in enumerate(async_results):
        results[i] = async_result.get()
    print(results)