What problems can lead to a CuDNNError with ConvolutionND

I use a three-dimensional convolution link (via ConvolutionND) in my chain.

The forward computation runs smoothly (I checked the intermediate result shapes to be sure I understood the meaning of the convolution_nd parameters), but during the backward computation a CuDNNError is raised with the message CUDNN_STATUS_NOT_SUPPORTED.

The cover_all parameter of ConvolutionND defaults to False, so from the documentation I cannot see what would cause the error.

Here is how I define one of the convolution layers:

self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(self.GPU_1_ID)

The call stack is

File "chainer/function_node.py", line 548, in backward_accumulate
    gxs = self.backward(target_input_indexes, grad_outputs)
File "chainer/functions/connection/convolution_nd.py", line 118, in backward
    gy, W, stride=self.stride, pad=self.pad, outsize=x_shape)
File "chainer/functions/connection/deconvolution_nd.py", line 310, in deconvolution_nd
    y, = func.apply(args)
File "chainer/function_node.py", line 258, in apply
    outputs = self.forward(in_data)
File "chainer/functions/connection/deconvolution_nd.py", line 128, in forward
    return self._forward_cudnn(x, W, b)
File "chainer/functions/connection/deconvolution_nd.py", line 105, in _forward_cudnn
    tensor_core=tensor_core)
File "cupy/cudnn.pyx", line 881, in cupy.cudnn.convolution_backward_data
File "cupy/cuda/cudnn.pyx", line 975, in cupy.cuda.cudnn.convolutionBackwardData_v3
File "cupy/cuda/cudnn.pyx", line 461, in cupy.cuda.cudnn.check_status
cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_NOT_SUPPORTED

So is there anything in particular to watch out for when using ConvolutionND?

Here is an example of failing code:

import chainer
from chainer import functions as F
from chainer import links as L
from chainer.backends import cuda

import numpy as np
import cupy as cp

chainer.global_config.cudnn_deterministic = False

NB_MASKS = 60
NB_FCN = 3
NB_CLASS = 17

class MFEChain(chainer.Chain):
    """docstring for Wavelphasenet."""
    def __init__(self,
                 FCN_Dim,
                 gpu_ids=None):
        super(MFEChain, self).__init__()

        self.GPU_0_ID, self.GPU_1_ID = (0, 1) if gpu_ids is None else gpu_ids
        with self.init_scope():
            self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(
                self.GPU_1_ID
            )

    def __call__(self, inputs):
        ### Pad input ###
        processed_sequences = []
        for convolved in inputs:
            ## Copy each input to the convolution GPU if necessary
            copy = convolved if self.GPU_0_ID == self.GPU_1_ID else F.copy(convolved, self.GPU_1_ID)
            processed_sequences.append(copy)

        reprocessed_sequences = []
        with cuda.get_device(self.GPU_1_ID):
            for convolved in processed_sequences:
                convolved = F.expand_dims(convolved, 0)
                convolved = F.expand_dims(convolved, 0)
                convolved = self.conv1(convolved)

                reprocessed_sequences.append(convolved)

            states = F.vstack(reprocessed_sequences)

            logits = states

            ret_logits = logits if self.GPU_0_ID == self.GPU_1_ID else F.copy(logits, self.GPU_0_ID)
        return ret_logits

def mfe_test():
    mfe = MFEChain(150)
    inputs = list(
        chainer.Variable(
            cp.random.randn(
                NB_MASKS,
                11,
                in_len,
                dtype=cp.float32
            )
        ) for in_len in [53248]
    )
    val = mfe(inputs)
    grad = cp.ones(val.shape, dtype=cp.float32)
    val.grad = grad
    val.backward()
    for i in inputs:
        print(i.grad)

if __name__ == "__main__":
    mfe_test()

cupy.cuda.cudnn.convolutionBackwardData_v3 is not compatible with some specific parameters, as described in an issue in the official github.

Unfortunately, that issue only deals with deconvolution_2d.py (not deconvolution_nd.py), so I guess the decision of whether or not to use cudnn fails in your case.
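
As a quick way to test that hypothesis, you can force Chainer to skip cuDNN entirely so that the plain CuPy fallback is used instead. A minimal sketch, reusing mfe, inputs and cp from your code above:

import chainer

# If backward() succeeds with cuDNN disabled, the failure is confined to the
# cuDNN code path taken in deconvolution_nd.py.
with chainer.using_config('use_cudnn', 'never'):
    val = mfe(inputs)
    val.grad = cp.ones(val.shape, dtype=cp.float32)
    val.backward()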

You can check your parameters by confirming the following:

  1. Check whether a dilation parameter (!= 1) or a group parameter (!= 1) is passed to the convolution.
  2. Print chainer.config.cudnn_deterministic, configuration.config.autotune, and configuration.config.use_cudnn_tensor_core (a snippet for this follows below).
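
A short sketch of how those flags can be inspected at run time (chainer.config is the thread-local view of configuration.config, so reading either works):

import chainer

# Print the configuration entries that influence which cuDNN algorithm is chosen.
print(chainer.config.cudnn_deterministic)
print(chainer.config.autotune)
print(chainer.config.use_cudnn_tensor_core)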

You can get further support by raising an issue in the official github.

The code you showed is quite complex.

To clarify the problem, a minimal script like the following would help.

from chainer import Variable, Chain
from chainer import links as L
from chainer import functions as F

import numpy as np

batch_size = 1
in_channel = 1
out_channel = 1

class MyLink(Chain):
    def __init__(self):
        super(MyLink, self).__init__()
        with self.init_scope():
            # Weight shape for ConvolutionND is (out_channels, in_channels, k1, k2, k3).
            self.conv = L.ConvolutionND(
                3, in_channel, out_channel, (3, 3, 3), nobias=True,
                initialW=np.ones(
                    (out_channel, in_channel, 3, 3, 3), dtype=np.float32))

    def __call__(self, x):
        return F.sum(self.conv(x))

if __name__ == "__main__":
    my_link = MyLink()
    my_link.to_gpu(0)
    # Chainer's default parameter dtype is float32, so the input must match.
    batch = Variable(np.ones((batch_size, in_channel, 3, 3, 3), dtype=np.float32))
    batch.to_gpu(0)
    loss = my_link(batch)
    loss.backward()
    print(batch.grad)
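
If this minimal GPU script already reproduces the error, running the same check on the CPU gives a reference gradient and rules out everything except the cuDNN path. A sketch, reusing the MyLink definition above (numpy arrays stay on the host, so cuDNN is never invoked):

my_link_cpu = MyLink()   # fresh link kept on the CPU
batch_cpu = Variable(np.ones((batch_size, in_channel, 3, 3, 3), dtype=np.float32))
loss_cpu = my_link_cpu(batch_cpu)
loss_cpu.backward()
# With the all-ones 3x3x3 kernel and a single valid output position,
# the input gradient should be an array of ones.
print(batch_cpu.grad)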