How would I use Dask to perform parallel operations on slices of NumPy arrays?

I have a numpy array of coordinates of size n_slice x 2048 x 3, where n_slice is in the tens of thousands. I want to apply the following operation to each 2048 x 3 slice separately:

import numpy as np
from scipy.spatial.distance import pdist

# load coor from a binary xyz file, dcd format

n_slice, n_coor, _ = coor.shape
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])

# this loop is what I want to parallelize, each slice is completely independent
for i in range(n_slice):
    dist[i, r[:, None] < r] = pdist(coor[i])

I tried to use Dask by making coor a dask.array,

import dask.array as da
dcoor = da.from_array(coor, chunks=(1, 2048, 3))

but simply replacing coor with dcoor does not expose any parallelism. I can see setting up a parallel thread to run for each slice, but how do I leverage Dask to handle the parallelism?

Here is a parallel implementation using concurrent.futures:

import concurrent.futures
import multiprocessing

import numpy as np
from scipy.spatial.distance import pdist

n_cpu = multiprocessing.cpu_count()

def get_dist(coor, dist, r):
    dist[r[:, None] < r] = pdist(coor)

# load coor from a binary xyz file, dcd format

n_slice, n_coor, _ = coor.shape
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])

with concurrent.futures.ThreadPoolExecutor(max_workers=n_cpu) as executor:
    for i in range(n_slice):
        executor.submit(get_dist, coor[i], dist[i], r)

This problem may not be a great fit for Dask, since there is no inter-chunk computation.

map_blocks

The map_blocks method may be helpful:

dcoor.map_blocks(pdist)

Uneven arrays

It looks like you're doing some fancy slicing to insert particular values into particular locations of an output array. This will probably be awkward to do with dask.arrays. Instead, I recommend making a function that produces a numpy array:

def myfunc(chunk):
    # chunk has shape (1, 2048, 3); compute the condensed distances for that slice
    values = pdist(chunk[0, :, :])
    # scatter them into the upper triangle of a 2048 x 2048 matrix
    output = np.zeros((2048, 2048))
    r = np.arange(2048)
    output[r[:, None] < r] = values
    return output

dcoor.map_blocks(myfunc)
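
Because each (1, 2048, 3) input block turns into a differently shaped output, map_blocks also needs to be told what the output blocks look like. A minimal sketch of one way to wire this up, assuming coor is already in memory; the myfunc3d helper and the chunks=/dtype= arguments are my additions for illustration, not part of the original answer:

import numpy as np
import dask.array as da
from scipy.spatial.distance import pdist

def myfunc3d(chunk):
    # chunk has shape (1, 2048, 3); return a (1, 2048, 2048) distance block
    n = chunk.shape[1]
    r = np.arange(n)
    output = np.zeros((1, n, n))
    output[0, r[:, None] < r] = pdist(chunk[0, :, :])
    return output

dcoor = da.from_array(coor, chunks=(1, 2048, 3))
# declare the per-block output shape, since it differs from the input blocks
ddist = dcoor.map_blocks(myfunc3d, chunks=(1, 2048, 2048), dtype=coor.dtype)
dist = ddist.compute()  # one pdist call per chunk, run in parallel

Keeping the output three-dimensional gives back an array with the same (n_slice, 2048, 2048) layout as dist in the original loop, without any reshaping.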

delayed

In the worst case you can always fall back to dask.delayed:

from dask import delayed, compute

# wrap the in-memory array and build one lazy pdist task per slice
coor2 = delayed(coor)
slices = [coor2[i] for i in range(coor.shape[0])]
tasks = [delayed(pdist)(s) for s in slices]
results = compute(*tasks)
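
If you want the results arranged back into the original dist array, you can fill it in after compute(). A rough sketch along those lines, repeating the task construction for completeness and assuming coor is in memory; the scheduler= choice is a suggestion of mine, not something from the original answer:

import numpy as np
from scipy.spatial.distance import pdist
from dask import delayed, compute

n_slice, n_coor, _ = coor.shape
r = np.arange(n_coor)

coor2 = delayed(coor)
tasks = [delayed(pdist)(coor2[i]) for i in range(n_slice)]

# try scheduler="processes" if the threaded scheduler gives no speedup
results = compute(*tasks, scheduler="threads")

dist = np.zeros([n_slice, n_coor, n_coor])
for i, values in enumerate(results):
    dist[i, r[:, None] < r] = values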