Python: Run function in parallel
Is there a way to speed this up by running it in parallel? Most of the processing time is spent in scipy.ndimage.map_coordinates.
import multiprocessing
import numpy as np
import scipy.ndimage
pool = multiprocessing.Pool()
n=6
x0=350
y0=350
r=150
num=10000
#z = np.gradient(sensor_dat, axis=1)
z = np.random.randn(700,700)
def func1(i):
    x1, y1 = x0 + r * np.cos(2 * np.pi * i / n), y0 + r * np.sin(2 * np.pi * i / n)
    x, y = np.linspace(x0, x1, num), np.linspace(y0, y1, num)
    zi = scipy.ndimage.map_coordinates(z, np.vstack((y, x)))
    return zi
[func1(i) for i in range(36)]
#pool.map(func1,range(36))
Starting from "Is there a simple process-based parallel map for python?", I tried pool.map(func1, range(36)),
but got the error Can't pickle <function func1 at 0x0000019408E6F438>: attribute lookup func1 on __main__ failed.
I found another question, but I don't think it is relevant here, since scipy.ndimage.map_coordinates accounts for most of the processing time and I don't think that approach would speed up my case.
Yes, you can. Just follow the instructions in the multiprocessing documentation, and measure whether using multiple workers is actually faster for your workload.
Here is the code I tested:
from multiprocessing import Pool
import numpy as np
from scipy import ndimage
from time import time
n=6
x0=350
y0=350
r=150
num=10000
#z = np.gradient(sensor_dat, axis=1)
z = np.random.randn(700,700)
def f(i):
    x1, y1 = x0 + r * np.cos(2 * np.pi * i / n), y0 + r * np.sin(2 * np.pi * i / n)
    x, y = np.linspace(x0, x1, num), np.linspace(y0, y1, num)
    zi = ndimage.map_coordinates(z, np.vstack((y, x)))
    return zi

if __name__ == '__main__':
    begin = time()
    [f(i) for i in range(36)]
    end = time()
    print('Single worker took {:.3f} secs'.format(end - begin))

    begin = time()
    with Pool() as p:
        p.map(f, list(range(36)))
    end = time()
    print('Parallel workers took {:.3f} secs'.format(end - begin))
This produces the following output on my machine:
Single worker took 0.793 secs
Parallel workers took 0.217 secs
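As for the original pickling error: Pool.map serializes the target function by its qualified name, so the function must be defined at the top level of an importable module (not in an interactive session), and the pool should only be created under an `if __name__ == '__main__':` guard, which is mandatory on Windows where workers are spawned rather than forked. A minimal sketch with a hypothetical `square` function, not part of the answer above:

```python
from multiprocessing import Pool

# Top-level function: worker processes look it up as __main__.square,
# so it must exist at module scope to be picklable.
def square(i):
    return i * i

if __name__ == '__main__':
    # Creating the pool inside this guard prevents child processes
    # from re-executing the pool setup when the module is re-imported.
    with Pool() as p:
        result = p.map(square, range(5))
    print(result)  # [0, 1, 4, 9, 16]
```

A lambda or a function defined inside another function fails to pickle in exactly the way the question's traceback shows.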