Expecting a contiguous tensor error with nn.Sum
I have a 2x16x3x10x10 tensor that I feed into my network. The network has two parts that work in parallel. The first part takes a 16x3x10x10 tensor and computes the sum over the last two dimensions, returning a 16x3 tensor.
The second part is a convolutional neural network that produces a 16x160 tensor.
Whenever I try to run this model, I get the following error:
...903/nTorch/Torch7/install/share/lua/5.1/torch/Tensor.lua:457: expecting a contiguous tensor
stack traceback:
[C]: in function 'assert'
...903/nTorch/Torch7/install/share/lua/5.1/torch/Tensor.lua:457: in function 'view'
...8/osu7903/nTorch/Torch7/install/share/lua/5.1/nn/Sum.lua:26: in function 'updateGradInput'
...03/nTorch/Torch7/install/share/lua/5.1/nn/Sequential.lua:40: in function 'updateGradInput'
...7903/nTorch/Torch7/install/share/lua/5.1/nn/Parallel.lua:52: in function 'updateGradInput'
...su7903/nTorch/Torch7/install/share/lua/5.1/nn/Module.lua:30: in function 'backward'
...03/nTorch/Torch7/install/share/lua/5.1/nn/Sequential.lua:73: in function 'backward'
./train_v2_with_batch.lua:144: in function 'opfunc'
...su7903/nTorch/Torch7/install/share/lua/5.1/optim/sgd.lua:43: in function 'sgd'
./train_v2_with_batch.lua:160: in function 'train'
run.lua:93: in main chunk
[C]: in function 'dofile'
...rch/Torch7/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00405800
Here is the relevant part of the model:
local first_part = nn.Parallel(1,2)
local CNN = nn.Sequential()
local sums = nn.Sequential()
sums:add(nn.Sum(3))
sums:add(nn.Sum(3))
first_part:add(sums)
-- stage 1: conv+max
CNN:add(nn.SpatialConvolutionMM(nfeats, convDepth_L1,receptiveFieldWidth_L1,receptiveFieldHeight_L1))
-- Since the default stride of the receptive field is 1, then
-- (assuming receptiveFieldWidth_L1 = receptiveFieldHeight_L1 = 3) the number of receptive fields is (10-3+1)x(10-3+1) or 8x8
-- so the output volume is (convDepth_L1 X 8 X 8) or 10 x 8 x 8
--CNN:add(nn.Threshold())
CNN:add(nn.ReLU())
CNN:add(nn.SpatialMaxPooling(poolsize,poolsize,poolsize,poolsize))
-- if poolsize=2, then the output of this is 10x4x4
CNN:add(nn.Reshape(convDepth_L1*outputWdith_L2*outputWdith_L2,true))
first_part:add(CNN)
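For reference, the sums branch on its own reduces the shapes as described above; a minimal standalone check (my sketch, not part of the training script):
require 'nn'
local sums = nn.Sequential()
sums:add(nn.Sum(3)) -- 16x3x10x10 -> 16x3x10
sums:add(nn.Sum(3)) -- 16x3x10    -> 16x3
print(sums:forward(torch.rand(16, 3, 10, 10)):size()) -- 16x3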
The code works when the input tensor is 2x1x3x10x10, but not when it is 2x16x3x10x10.
Edit: I just realized that this happens when I call model:backward, not model:forward. Here is the relevant code:
local y = model:forward(x)
local E = loss:forward(y,yt)
-- estimate df/dW
local dE_dy = loss:backward(y,yt)
print(dE_dy)
model:backward(x,dE_dy)
x is a 2x16x3x10x10 tensor and dE_dy is 16x2.
This is a flaw in the torch.nn library. To perform the backward step, nn.Parallel splits the gradOutput it receives from the higher module into pieces and sends them to its parallel submodules. The split is done efficiently, without copying memory, so those pieces are non-contiguous (unless you split on the first dimension).
local first_part = nn.Parallel(1,2)
--                               ^
-- Merging on the 2nd dimension;
-- Chunks of the split gradOutput will not be contiguous
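You can observe this contiguity behavior on its own; a minimal illustrative check (shapes chosen to match this model, where a 16x3 and a 16x160 output are merged into 16x163):
require 'torch'
local g = torch.rand(16, 163) -- stand-in for the merged gradOutput
print(g:narrow(2, 1, 3):isContiguous())   -- false: the 16x3 chunk for the sums branch is strided
print(g:narrow(2, 4, 160):isContiguous()) -- false: same for the 16x160 CNN chunk
print(g:narrow(1, 1, 8):isContiguous())   -- true: a split on the first dimension stays contiguous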
The problem is that nn.Sum cannot work with a non-contiguous gradOutput. I have no better idea than to patch it:
Sum_nc, _ = torch.class('nn.Sum_nc', 'nn.Sum')

function Sum_nc:updateGradInput(input, gradOutput)
   local size = input:size()
   size[self.dimension] = 1
   -- modified code:
   if gradOutput:isContiguous() then
      gradOutput = gradOutput:view(size) -- doesn't work with non-contiguous tensors
   else
      gradOutput = gradOutput:resize(size) -- slower because of memory reallocation and changes gradOutput
      -- gradOutput = gradOutput:clone():resize(size) -- doesn't change gradOutput; safer and even slower
   end
   --
   self.gradInput:resizeAs(input)
   self.gradInput:copy(gradOutput:expandAs(input))
   return self.gradInput
end
[...]
sums = nn.Sequential()
sums:add(nn.Sum_nc(3)) -- <- will use torch.view
sums:add(nn.Sum_nc(3)) -- <- will use torch.resize
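Note why the two comments differ: nn.Sequential runs backward from the last module to the first, so the second nn.Sum_nc receives the non-contiguous chunk straight from nn.Parallel and takes the resize branch, while the first one receives the second's freshly allocated (and therefore contiguous) gradInput and takes the view branch.
As an alternative sketch (my suggestion, not part of the original patch), torch.Tensor's contiguous() returns the tensor itself when it is already contiguous and a contiguous copy otherwise, so the subsequent view always succeeds and the logical element order is preserved:
-- hypothetical drop-in replacement for the if/else branch above:
gradOutput = gradOutput:contiguous():view(size)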