In Multi-dimensional array products, how do I align axes with and without summation?

What is the best way to perform array operations when some repeated indices are summed over while others are not? It seems I may have to use einsum for these operations, but a tensordot-like alternative with a flag for dimensions that are aligned but not summed would be better.

Does anyone know of a fast numerical routine (perhaps in LAPACK?) that behaves like tensordot, except that some axes can be aligned without being summed over?
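
For concreteness, here is a minimal sketch of the semantics I am after, written as an explicit Python loop over the aligned axis (aligned_tensordot is a hypothetical name, and a plain loop like this is of course too slow to be the answer):

import numpy as np

def aligned_tensordot(a, b):
    "Align the j-axis without summing over it; contract only the shared l-axis."
    I, J, L = a.shape
    K = b.shape[1]
    c = np.empty((I, J, K))
    for j in range(J):
        # a[:, j, :] is (I, L); b[j] is (K, L); tensordot over the
        # l-axes gives the (I, K) block for this j.
        c[:, j, :] = np.tensordot(a[:, j, :], b[j], axes=[(1,), (1,)])
    return c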

==

Here is example code showing the kinds of array operations needed. The operation I need is performed by method_sum, method_einsum, and method_matmul. method2_einsum and method2_tensordot perform a similar operation that also sums over the matching j-axis.

Comparing the timings, it seems that tensordot should be able to beat einsum on the first problem. However, it has no way to align axes without summing over them.

import numpy as np

# Shapes of arrays
I = 200
J = 50
K = 200
L = 100

a = np.ones((I, J, L))
b = np.ones((J, K, L))


# The desired product has a sum over the l-axis

## Use broadcasting to multiply and sum over the last dimension
def method_sum(a, b):
    "Multiply arrays with broadcasting and sum over the last (l) dimension."
    # a[:, :, None, :] is (I, J, 1, L) and b[None, :, :, :] is (1, J, K, L);
    # the product broadcasts to (I, J, K, L), and summing axis -1 gives (I, J, K).
    c = (a[:, :, None, :] * b[None, :, :, :]).sum(-1)
    return c

## Use einsum to multiply arrays and sum over the l-axis
def method_einsum(a, b):
    "Multiply arrays and sum over the last (l) dimension with einsum."
    # 'ijl,jkl->ijk' keeps i, j and k, summing only the repeated l index.
    c = np.einsum('ijl,jkl->ijk', a, b)
    return c

## Use matmul to multiply arrays and sum over one of the axes
def method_matmul(a, b):
    "Multiply arrays using the matmul operation (NumPy >= 1.10)."
    # a[:, :, None, None, :] is (I, J, 1, 1, L); b[None, :, :, :, None] is (1, J, K, L, 1).
    # matmul contracts the last two axes, (1, L) @ (L, 1) -> (1, 1), while the
    # leading axes broadcast to (I, J, K); indexing then drops the two unit axes.
    c = np.matmul(a[:, :, None, None, :],
                  b[None, :, :, :, None])[:, :, :, 0, 0]
    return c


# Compare einsum vs tensordot on summation over j and l

## einsum takes about the same amount of time as when j is not summed over
def method2_einsum(a, b):
    "Multiply arrays and sum over both the j- and l-axes."
    c = np.einsum('ijl,jkl->ik', a, b)
    return c

## tensordot can do this faster, but it always sums over the aligned axes
def method2_tensordot(a, b):
    "Multiply and sum over all overlapping dimensions (both j and l)."
    # axes=[(1, 2), (0, 2)] pairs a's (j, l) axes with b's (j, l) axes and sums both.
    c = np.tensordot(a, b, axes=[(1, 2), (0, 2)])
    return c
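
For reference, the timings below can be reproduced outside IPython with the standard timeit module; a minimal sketch (the repeat and number values are illustrative):

import timeit

for fn in [method_sum, method_einsum, method_matmul,
           method2_einsum, method2_tensordot]:
    # Best of 3 single calls, mirroring IPython's %timeit report.
    t = min(timeit.repeat(lambda: fn(a, b), number=1, repeat=3))
    print('Time for %s: %.1f ms per loop' % (fn.__name__, t * 1e3))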

Below are timings for the various routines on my computer. tensordot can beat einsum on method2 because it uses multiple cores. I would like to achieve tensordot-like performance for the case where the J and L axes are aligned but only the L axis is summed over.

Time for method_sum:
1 loops, best of 3: 744 ms per loop

Time for method_einsum:
10 loops, best of 3: 95.1 ms per loop

Time for method_matmul:
10 loops, best of 3: 93.8 ms per loop

Time for method2_einsum:
10 loops, best of 3: 90.4 ms per loop

Time for method2_tensordot:
100 loops, best of 3: 10.9 ms per loop
==

With a small test case, einsum produces the expected shape:

In [85]: I,J,K,L=2,3,4,5
In [86]: a=np.ones((I,J,L))
In [87]: b=np.ones((J,K,L))

In [88]: np.einsum('ijl,jkl->ijk',a,b).shape
Out[88]: (2, 3, 4)

With the new @ operator, I find that I can produce:

In [91]: (a[:,:,None,None,:]@b[None,:,:,:,None]).shape
Out[91]: (2, 3, 4, 1, 1)

In [93]: (a[:,:,None,None,:]@b[None,:,:,:,None])[...,0,0].shape
Out[93]: (2, 3, 4)

The shape is right, but I have not checked the values. Some of the Nones line up the i, j and k axes, while the other two produce the regular dot behavior (last axis paired with the second-to-last).
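
A quick way to check the values against einsum on the small arrays (with all-ones inputs every entry of the result should equal L):

import numpy as np

I, J, K, L = 2, 3, 4, 5
a = np.ones((I, J, L))
b = np.ones((J, K, L))
ref = np.einsum('ijl,jkl->ijk', a, b)
alt = (a[:, :, None, None, :] @ b[None, :, :, :, None])[..., 0, 0]
print(np.allclose(ref, alt))   # True; every entry is L == 5.0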

For your sizes, the times are about the same:

In [97]: np.einsum('ijl,jkl->ijk',a,b).shape
Out[97]: (200, 50, 200)
In [98]: (a[:,:,None,None,:]@b[None,:,:,:,None])[...,0,0].shape
Out[98]: (200, 50, 200)
In [99]: timeit np.einsum('ijl,jkl->ijk',a,b).shape
1 loop, best of 3: 2 s per loop
In [100]: timeit (a[:,:,None,None,:]@b[None,:,:,:,None])[...,0,0].shape
1 loop, best of 3: 2.06 s per loop
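
One more arrangement worth trying, sketched here but not timed above: move the aligned j-axis to the front so that matmul treats it as a batch dimension and contracts l directly, avoiding the two size-1 axes:

import numpy as np

def method_batched_matmul(a, b):
    "Treat the aligned j-axis as a batch axis for matmul; sum only over l."
    # a.transpose(1, 0, 2) is (J, I, L); b.transpose(0, 2, 1) is (J, L, K).
    # Batched matmul yields (J, I, K); transpose back to (I, J, K).
    return np.matmul(a.transpose(1, 0, 2),
                     b.transpose(0, 2, 1)).transpose(1, 0, 2)

Whether this beats the None-padded @ form will depend on the BLAS build, so it is worth timing on your machine.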