Speeding Up Kronecker Products Numpy

So I am trying to compute the Kronecker product of two matrices of arbitrary dimensions. (For the example I am using square matrices of the same dimension.)
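For reference, a tiny example of what np.kron computes: each entry of a scales a full copy of b.

import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[0, 1],
              [1, 0]])
print(np.kron(a, b))
# [[0 1 0 2]
#  [1 0 2 0]
#  [0 3 0 4]
#  [3 0 4 0]]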

Initially I tried using kron:

import numpy as np
import time

a = np.random.random((60,60))
b = np.random.random((60,60))

start = time.time()
a = np.kron(a,b)
end = time.time()

Output: 0.160096406936645

To try to speed this up, I used tensordot:

a = np.random.random((60,60))
b = np.random.random((60,60))

start = time.time()
a = np.tensordot(a,b,axes=0)
a = np.transpose(a,(0,2,1,3))
a = np.reshape(a,(3600,3600))
end = time.time()

Output: 0.11808371543884277

After a bit of searching online, I found (or at least this is my understanding of it) that numpy makes an extra copy when it has to reshape a tensor that has been transposed.
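A minimal sketch that checks this with np.shares_memory (my own test, not from the linked discussion): reshaping the contiguous tensordot result gives a view, while reshaping the transposed version forces a copy.

import numpy as np

t = np.tensordot(np.random.random((60,60)), np.random.random((60,60)), axes=0)
view = t.reshape(3600,3600)                      # contiguous: a view
copy = t.transpose(0,2,1,3).reshape(3600,3600)   # non-contiguous: forces a copy
print(np.shares_memory(t, view))   # True
print(np.shares_memory(t, copy))   # False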

So I tried the following (this code obviously does not give the Kronecker product of a and b, but I was only doing it as a test):

a = np.random.random((60,60))
b = np.random.random((60,60))

start = time.time()
a = np.tensordot(a,b,axes=0)
a = np.reshape(a,(3600,3600))
end = time.time()

Output: 0.052041053771972656

My question is: how can I compute the Kronecker product without running into this transpose-related problem?

I am just looking for a quick speedup, so the solution does not have to use tensordot.

Edit

I just found, on this Stack post: speeding up numpy kronecker products, that there is yet another way to do this:

a = np.random.random((60,60))
b = np.random.random((60,60))

c = a

start = time.time()
a = a[:,np.newaxis,:,np.newaxis]
a = a*b[np.newaxis,:,np.newaxis,:]
a.shape = (3600,3600)
end = time.time()

test = np.kron(c,b)
print(np.array_equal(a,test))
print(end-start)


Output: True
0.05503702163696289

I would still be interested in whether this computation can be sped up even further.

einsum seems to work:

>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>> ab = np.kron(a,b)
>>> abe = np.einsum('ik,jl', a, b).reshape(3600,3600)
>>> (abe==ab).all()
True
>>> from timeit import timeit
>>> timeit(lambda: np.kron(a, b), number=10)
1.0697475590277463
>>> timeit(lambda: np.einsum('ik,jl', a, b).reshape(3600,3600), number=10)
0.42500176999601535

Plain broadcasting is even faster:

>>> abb = (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600)
>>> (abb==ab).all()
True
>>> timeit(lambda: (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600), number=10)
0.28011218502069823

Update: using blas and cython we can get another modest (30%) speedup. Decide for yourself whether it is worth the hassle.

[setup.py]

from distutils.core import setup
from Cython.Build import cythonize

setup(name='kronecker',
      ext_modules=cythonize("cythkrn.pyx"))

[cythkrn.pyx]

import cython
cimport scipy.linalg.cython_blas as blas
import numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
def kron(double[:, ::1] a, double[:, ::1] b):
    cdef int i = a.shape[0]
    cdef int j = a.shape[1]
    cdef int k = b.shape[0]
    cdef int l = b.shape[1]
    cdef int onei = 1
    cdef double oned = 1
    cdef int m, n
    result = np.zeros((i*k, j*l), float)
    cdef double[:, ::1] result_v = result
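    # each dger call writes kron(a[n, :], b[m, :]) into row n*k + m of the result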
    for n in range(i):
        for m in range(k):
            blas.dger(&l, &j, &oned, &b[m, 0], &onei, &a[n, 0], &onei, &result_v[m+k*n, 0], &l)
    return result

To build, run cython cythkrn.pyx and then python3 setup.py build.

>>> from timeit import timeit
>>> import cythkrn
>>> import numpy as np
>>> 
>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>>
>>> np.all(cythkrn.kron(a, b)==np.kron(a, b))
True
>>> 
>>> timeit(lambda: cythkrn.kron(a, b), number=10)
0.18925874299020506

Speeding up memory-bound computations

  • Avoiding the memory-bound computation altogether is possible (e.g. the kron_and_sum example)
  • Blocked execution, when combining it with other computations
  • Maybe float32 instead of float64 is also sufficient
  • If this computation is inside a loop, allocate the memory only once (see the sketch after this list)
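A minimal sketch of the last point, using plain NumPy broadcasting and a reused output buffer (the buffer name and the loop are illustrative only):

import numpy as np

a = np.random.random((60,60))
b = np.random.random((60,60))
out = np.empty((60,60,60,60))  # allocated once, outside the loop

for _ in range(10):  # stand-in for a real iterative computation
    np.multiply(a[:,None,:,None], b[None,:,None,:], out=out)
    result = out.reshape(3600,3600)  # contiguous, so this reshape is a view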

I get exactly the same timings with this code as with @Paul Panzer's implementation, but with both implementations I see the same odd behaviour. With pre-allocated memory there is absolutely no speedup if the computation is parallelized (which is expected), but without pre-allocated memory there is quite a significant one.

Code

import numba as nb
import numpy as np


@nb.njit(fastmath=True,parallel=True)
def kron(A,B):
    out=np.empty((A.shape[0],B.shape[0],A.shape[1],B.shape[1]),dtype=A.dtype)
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i,j,k,l]=A[i,k]*B[j,l]
    return out

@nb.njit(fastmath=True,parallel=False)
def kron_preallocated(A,B,out):
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i,j,k,l]=A[i,k]*B[j,l]
    return out

@nb.njit(fastmath=True,parallel=True)
def kron_and_sum(A,B):
    out=0.
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out+=A[i,k]*B[j,l]
    return out

Timings

#Create some data
a_float64 = np.random.random((60,60))
b_float64 = np.random.random((60,60))
a_float32 = a_float64.astype(np.float32)
b_float32 = b_float64.astype(np.float32)
out_float64=np.empty((a_float64.shape[0],b_float64.shape[0],a_float64.shape[1],b_float64.shape[1]),dtype=np.float64)
out_float32=np.empty((a_float32.shape[0],b_float32.shape[0],a_float32.shape[1],b_float32.shape[1]),dtype=np.float32)


#Reference
%timeit np.kron(a_float64,b_float64)
147 ms ± 1.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

#If you have to allocate memory for every calculation (float64)
%timeit B=kron(a_float64,b_float64).reshape(3600,3600)
17.6 ms ± 244 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#If you don't have to allocate memory for every calculation (float64)
%timeit B=kron_preallocated(a_float64,b_float64,out_float64).reshape(3600,3600)
8.08 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#If you have to allocate memory for every calculation (float32)
%timeit B=kron(a_float32,b_float32).reshape(3600,3600)
9.27 ms ± 185 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#If you don't have to allocate memory for every calculation (float32)
%timeit B=kron_preallocated(a_float32,b_float32,out_float32).reshape(3600,3600)
3.95 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#Example for a joined operation (sum of kroncker product)
#which isn't memory bottlenecked
%timeit B=kron_and_sum(a_float64,b_float64)
881 µs ± 104 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

I improved np.kron a bit (by 57%) here: https://github.com/numpy/numpy/pull/21232

The idea is to get rid of the concatenate, which happened to be what was causing a ValueError.

Anyway, since stumbling upon this, it seems @Paul Panzer has an even better solution using broadcasting in his answer, which I am now trying to add to NumPy. You can keep a lookout at https://github.com/numpy/numpy/issues/21257 for progress. Thanks @Paul for the idea.
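For reference, here is that broadcasting trick packaged as a standalone function for arbitrary 2-D inputs (a sketch of the technique, not the exact code in the PR):

import numpy as np

def kron_broadcast(a, b):
    # Kronecker product of two 2-D arrays via broadcasting, no concatenate
    return (a[:,None,:,None] * b[None,:,None,:]).reshape(
        a.shape[0]*b.shape[0], a.shape[1]*b.shape[1])

For the 60x60 inputs above, kron_broadcast(a, b) matches np.kron(a, b) exactly.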