Passing Individual Channels of Tensors to Layers in Keras

I'm trying to emulate something like the SeparableConvolution2D layer for the Theano backend (it already exists for the TensorFlow backend). As a first step, I need to pass a single channel of a tensor on to the next layer. Say I have a 2D convolution layer named conv1 with 16 filters, which produces an output of shape (batch_size, 16, height, width). I need to select the slice (:, 0, :, :) and pass it to the next layer. Simple enough, right?

Here's my code:

from keras import backend as K
from keras.layers import Input, Convolution2D

image_input = Input(batch_shape=(batch_size, 1, height, width), name='image_input')

conv1 = Convolution2D(16, 3, 3, name='conv1', activation='relu')(image_input)

# take channel 0 of conv1 and reshape it back into a single-channel tensor
conv2_input = K.reshape(conv1[:, 0, :, :], (batch_size, 1, height, width))

conv2 = Convolution2D(16, 3, 3, name='conv1', activation='relu')(conv2_input)

This throws:

Exception: You tried to call layer "conv1". This layer has no information about its expected input shape, and thus cannot be built. You can build it manually via: layer.build(batch_input_shape)

Why does the layer have no information about its expected input shape? I'm using the Theano backend's reshape. Is this the right way to pass a single channel to the next layer?

I asked this question on the keras-users group and got an answer there:

https://groups.google.com/forum/#!topic/keras-users/bbQ5CbVXT1E

Quoting it:

You need to use a lambda layer, like: Lambda(x: x[:, 0:1, :, :], output_shape=lambda x: (x[0], 1, x[2], x[3]))

Note that such a manual implementation of a separable convolution would be horribly inefficient. The correct solution is to use the TensorFlow backend.
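For completeness, here is a minimal sketch of how that suggestion might be wired up (Keras 1.x functional API, Theano dimension ordering, with placeholder values for batch_size, height and width); it spells out the lambda keyword that the quoted one-liner elides. Because the slicing is done by a Lambda layer instead of a raw K.reshape, the output stays a Keras tensor with the shape metadata the next convolution needs in order to build itself:

from keras.layers import Input, Convolution2D, Lambda
from keras.models import Model

# placeholder sizes, for illustration only
batch_size, height, width = 32, 64, 64

image_input = Input(batch_shape=(batch_size, 1, height, width), name='image_input')
conv1 = Convolution2D(16, 3, 3, name='conv1', activation='relu')(image_input)

# slice out channel 0 while keeping the channel axis, as a proper Keras layer
take_channel_0 = Lambda(lambda x: x[:, 0:1, :, :],
                        output_shape=lambda s: (s[0], 1, s[2], s[3]))
conv2_input = take_channel_0(conv1)

conv2 = Convolution2D(16, 3, 3, name='conv2', activation='relu')(conv2_input)
model = Model(input=image_input, output=conv2)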