Keras: Feeding in part of previous layer to next layer, in CNN
I am trying to feed the individual kernel outputs of the previous layer into new conv filters to get the next layer. To do so, I tried to pass each kernel output through Conv2D, selecting them by their index. The function I used is:
def modification(weights_path=None, classes=2):
    ###########
    ## Input ##
    ###########
    ### 224x224x3 sized RGB Input
    inputs = Input(shape=(224,224,3))

    #################################
    ## Conv2D Layer with 5 kernels ##
    #################################
    k = 5
    x = Conv2D(k, (3,3), data_format='channels_last', padding='same', name='block1_conv1')(inputs)

    y = np.empty(k, dtype=object)
    for i in range(0, k):
        y[i] = Conv2D(1, (3,3), data_format='channels_last', padding='same')(np.asarray([x[i]]))
    y = keras.layers.concatenate([y[i] for i in range(0, k)], axis=3, name='block1_conv1_loc')
    out = Activation('relu')(y)
    print ('Output shape is, ' + str(out.get_shape()))

    ### Maxpooling(2,2) with a stride of (2,2)
    out = MaxPooling2D((2,2), strides=(2,2), data_format='channels_last')(out)

    ############################################
    ## Top layer, with fully connected layers ##
    ############################################
    out = Flatten(name='flatten')(out)
    out = Dense(4096, activation='relu', name='fc1')(out)
    out = Dropout(0.5)(out)
    out = Dense(4096, activation='relu', name='fc2')(out)
    out = Dropout(0.5)(out)
    out = Dense(classes, activation='softmax', name='predictions')(out)

    if weights_path:
        model.load_weights(weights_path)

    model = Model(inputs, out, name='modification')
    return model
But this doesn't work, and throws the following error:
Traceback (most recent call last):
File "sim-conn-edit.py", line 137, in <module>
model = modification()
File "sim-conn-edit.py", line 38, in modification
y[i] = Conv2D(1, (3,3), data_format='channels_last', padding='same')(np.asarray([x[i]]))
File "/home/yx96/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 511, in __call__
self.assert_input_compatibility(inputs)
File "/home/yx96/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 408, in assert_input_compatibil
ity
if K.ndim(x) != spec.ndim:
File "/home/yx96/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 437, in ndim
dims = x.get_shape()._dims
AttributeError: 'numpy.ndarray' object has no attribute 'get_shape'
I fed the x[i] layer in as [ x[i] ] to meet the dimension requirements of the Conv2D layer. Any help in resolving this issue would be much appreciated!
After posting a couple of follow-up questions on Stack Overflow and doing some exploration of my own, I came up with a solution. This can be done with a Lambda layer, by using it to extract a sub-part of the previous layer. For example, if the Lambda function is defined as,
def layer_slice(x, i):
    return x[:, :, :, i:i+1]
and then called as,
k = 5
x = Conv2D(k, (3,3), data_format='channels_last', padding='same', name='block1_conv1')(inputs)

y = np.empty(k, dtype=object)
for i in range(0, k):
    y[i] = Lambda(layer_slice, arguments={'i': i})(x)
    y[i] = Conv2D(1, (3,3), data_format='channels_last', padding='same')(y[i])
y = keras.layers.concatenate([y[i] for i in range(0, k)], axis=3, name='block1_conv1_loc')
out = Activation('relu')(y)
print ('Output shape is, ' + str(out.get_shape()))
it effectively feeds each individual kernel output into a new Conv2D layer. The layer shapes and corresponding numbers of trainable parameters obtained from model.summary() are as expected. Thanks to Daniel for pointing out that Lambda layers cannot have trainable weights.
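For reference, here is a minimal self-contained sketch of this Lambda-slice approach; the imports, the small 32x32 input, and the truncated model are my additions for brevity (the original uses 224x224 inputs and a full classifier on top):

import keras
from keras.layers import Input, Conv2D, Lambda, Activation
from keras.models import Model

def layer_slice(x, i):
    # keep the channel axis, so the slice has shape (batch, H, W, 1)
    return x[:, :, :, i:i+1]

k = 5
inputs = Input(shape=(32, 32, 3))   # smaller than 224x224, just for the demo
x = Conv2D(k, (3, 3), padding='same', name='block1_conv1')(inputs)

branches = []
for i in range(k):
    s = Lambda(layer_slice, arguments={'i': i})(x)           # output of kernel i
    branches.append(Conv2D(1, (3, 3), padding='same')(s))    # its own 1-filter conv

y = keras.layers.concatenate(branches, axis=3, name='block1_conv1_loc')
out = Activation('relu')(y)

model = Model(inputs, out)
model.summary()   # each branch conv shows 10 trainable parameters (3*3*1 + 1)

Note that collecting the branches in a plain Python list also avoids mixing NumPy arrays into the Keras graph, which is what triggered the original AttributeError.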
Prabaha, I know you already solved your problem, but now that I see your answer: you can also do this without Lambda layers, simply by splitting the first Conv2D into many. A layer with k filters is equivalent to k layers with one filter each:
for i in range(0, k):
    y[i] = Conv2D(1, (3,3), ... , name='block1_conv'+str(i))(inputs)
    y[i] = Conv2D(1, (3,3), ...)(y[i])
y = Concatenate()([y[i] for i in range(0, k)])
out = Activation('relu')(y)
You can count the total number of parameters in your answer and in this one to compare.
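If it helps, here is a quick sketch of that comparison, using the standard Conv2D parameter formula, filters * (kernel_h * kernel_w * in_channels + 1), instead of building the models; the helper function is mine, for illustration:

def conv2d_params(filters, kernel, in_channels):
    kh, kw = kernel
    return filters * (kh * kw * in_channels + 1)

k = 5
# Lambda-slice version: one Conv2D(k) on the RGB input,
# then k Conv2D(1) layers, each on a 1-channel slice
lambda_version = conv2d_params(k, (3, 3), 3) + k * conv2d_params(1, (3, 3), 1)

# split version: k Conv2D(1) layers on the RGB input,
# then k Conv2D(1) layers on their 1-channel outputs
split_version = k * conv2d_params(1, (3, 3), 3) + k * conv2d_params(1, (3, 3), 1)

print(lambda_version, split_version)   # 190 190 -- the two designs match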