How to split a Keras model, with a non-sequential architecture like ResNet, into sub-models?
My model is ResNet-152 and I want to split it into two sub-models. The problem is with the second one: I don't know how to build a model that goes from an intermediate layer to the output.
I tried this piece of code, but it did not work for me. Here is my code:
def getLayerIndexByName(model, layername):
    for idx, layer in enumerate(model.layers):
        if layer.name == layername:
            return idx

idx = getLayerIndexByName(resnet, 'res3a_branch2a')

input_shape = resnet.layers[idx].get_input_shape_at(0) # which is here in my case (None, 55, 55, 256)

layer_input = Input(shape=input_shape[1:]) # as keras will add the batch shape

# create the new nodes for each layer in the path
x = layer_input
for layer in resnet.layers[idx:]:
    x = layer(x)

# create the model
new_model = Model(layer_input, x)
I get this error:
ValueError: Input 0 is incompatible with layer res3a_branch1: expected axis -1 of input shape to have value 256 but got shape (None, 28, 28, 512).
I also tried this function:
import random

def split(model, start, end):
    confs = model.get_config()
    kept_layers = set()
    for i, l in enumerate(confs['layers']):
        if i == 0:
            confs['layers'][0]['config']['batch_input_shape'] = model.layers[start].input_shape
            if i != start:
                confs['layers'][0]['name'] += str(random.randint(0, 100000000)) # rename the input layer to avoid conflicts on merge
                confs['layers'][0]['config']['name'] = confs['layers'][0]['name']
        elif i < start or i > end:
            continue
        kept_layers.add(l['name'])
    # filter layers
    layers = [l for l in confs['layers'] if l['name'] in kept_layers]
    layers[1]['inbound_nodes'][0][0][0] = layers[0]['name']
    # set conf
    confs['layers'] = layers
    confs['input_layers'][0][0] = layers[0]['name']
    confs['output_layers'][0][0] = layers[-1]['name']
    # create new model
    submodel = Model.from_config(confs)
    for l in submodel.layers:
        orig_l = model.get_layer(l.name)
        if orig_l is not None:
            l.set_weights(orig_l.get_weights())
    return submodel
And I get this error:
ValueError: Unknown layer: Scale
because my ResNet152 contains a Scale layer.
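(A possible workaround, sketched here only as an assumption since I don't know the internals of the resnet package being used: Keras normally lets you pass unknown custom layer classes via custom_objects when rebuilding a model from its config. That would mean replacing the Model.from_config(confs) line inside split() with something like:)

from resnet import Scale   # hypothetical import path for the custom Scale layer class

# tell Keras how to deserialize the unknown custom layer
submodel = Model.from_config(confs, custom_objects={'Scale': Scale})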
Here is a runnable version to reproduce the problem:
import resnet   # pip install resnet

from keras.models import Model
from keras.layers import Input

def getLayerIndexByName(model, layername):
    for idx, layer in enumerate(model.layers):
        if layer.name == layername:
            return idx

resnet = resnet.ResNet152(weights='imagenet')

idx = getLayerIndexByName(resnet, 'res3a_branch2a')

model1 = Model(inputs=resnet.input, outputs=resnet.get_layer('res3a_branch2a').output)

input_shape = resnet.layers[idx].get_input_shape_at(0) # get the input shape of desired layer
print(input_shape[1:])

layer_input = Input(shape=input_shape[1:]) # a new input tensor to be able to feed the desired layer

# create the new nodes for each layer in the path
x = layer_input
for layer in resnet.layers[idx:]:
    x = layer(x)

# create the model
model2 = Model(layer_input, x)

model2.summary()
This is the error:
ValueError: Input 0 is incompatible with layer res3a_branch1: expected axis -1 of input shape to have value 256 but got shape (None, 28, 28, 512)
As I mentioned in the comments section, since the ResNet model does not have a linear architecture (i.e. it has skip connections, and a layer may be connected to multiple other layers), you cannot simply iterate over the layers of the model one after another in a loop and apply each layer on the output of the previous layer (unlike models with a linear architecture, for which this approach works).
So you need to find the connectivity of the layers and traverse that connectivity graph to be able to construct a sub-model of the original model. Currently, this is the solution that comes to my mind:
- Determine the last layer of your sub-model.
- Starting from that layer, find all the layers connected to it.
- Get the outputs of those connected layers.
- Apply the last layer on the collected outputs.
Obviously, step #3 implies recursion: to get the output of the connected layers (i.e. X), we first need to find their connected layers (i.e. Y), get their outputs (i.e. the outputs of Y), and then apply them on those outputs (i.e. apply X on the outputs of Y). Further, to find the connected layers you need to know a bit about the internals of Keras. So we come up with this solution:
from keras.applications.resnet50 import ResNet50
from keras import models
from keras import layers

resnet = ResNet50()

# this is the split point, i.e. the starting layer in our sub-model
starting_layer_name = 'activation_46'

# create a new input layer for the sub-model we want to construct
new_input = layers.Input(batch_shape=resnet.get_layer(starting_layer_name).get_input_shape_at(0))

layer_outputs = {}
def get_output_of_layer(layer):
    # if we have already applied this layer on its input(s) tensors,
    # just return its already computed output
    if layer.name in layer_outputs:
        return layer_outputs[layer.name]

    # if this is the starting layer, then apply it on the input tensor
    if layer.name == starting_layer_name:
        out = layer(new_input)
        layer_outputs[layer.name] = out
        return out

    # find all the connected layers whose output this layer consumes
    prev_layers = []
    for node in layer._inbound_nodes:
        prev_layers.extend(node.inbound_layers)

    # get the output of connected layers
    pl_outs = []
    for pl in prev_layers:
        pl_outs.extend([get_output_of_layer(pl)])

    # apply this layer on the collected outputs
    out = layer(pl_outs[0] if len(pl_outs) == 1 else pl_outs)
    layer_outputs[layer.name] = out
    return out

# note that we start from the last layer of our desired sub-model.
# this layer could be any layer of the original model as long as it is
# reachable from the starting layer
new_output = get_output_of_layer(resnet.layers[-1])

# create the sub-model
model = models.Model(new_input, new_output)
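As a quick sanity check (a small sketch of my own, not part of the solution itself), you can feed a random tensor with the sub-model's expected input shape and inspect the output shape:

import numpy as np

# feed a random tensor shaped like the starting layer's input
dummy = np.random.rand(1, *model.input_shape[1:]).astype('float32')
print(model.predict(dummy).shape)   # (1, 1000) for the full ResNet50 classification head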
Important notes:
This solution assumes that each layer in the original model is used only once; i.e. it does not work for Siamese networks, where a layer may be shared and therefore applied more than once on different input tensors.
If you want to split a model properly into multiple sub-models, it only makes sense to use as split points (e.g. starting_layer_name in the code above) layers that are not inside a branch (e.g. in ResNet, the activation layer after a merge layer is a good choice, but the res3a_branch2a you selected is not a good option, since it lies inside a branch). To get a better view of the original architecture of the model, you can always plot its diagram using the plot_model() utility function:
from keras.applications.resnet50 import ResNet50
from keras.utils import plot_model

resnet = ResNet50()
plot_model(resnet, to_file='resnet_model.png')
Since new nodes are created when a sub-model is constructed, do not try to construct another sub-model that overlaps with a previous sub-model (it is fine if they do not overlap!) in the same run of the code above; otherwise, you may encounter errors.
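One way to work around this (my own sketch, not part of the original answer): clear the Keras session and reload the base model before building a second sub-model, reusing the names and the get_output_of_layer function from the snippet above; the split point below is hypothetical.

from keras import backend as K

# start from a fresh graph before constructing another, possibly overlapping sub-model
K.clear_session()
resnet = ResNet50()
layer_outputs = {}                       # reset the memoization dict as well
starting_layer_name = 'activation_40'    # hypothetical second split point
new_input = layers.Input(batch_shape=resnet.get_layer(starting_layer_name).get_input_shape_at(0))
new_output = get_output_of_layer(resnet.layers[-1])
second_sub_model = models.Model(new_input, new_output)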
I ran into a similar problem when slicing an Inception CNN for transfer learning, in order to set only the layers after a certain point as trainable.
def get_layers_above(cutoff_layer, model):

    def get_next_level(layer, model):
        def wrap_list(val):
            if type(val) is list:
                return val
            return [val]
        r = []
        for output_t in wrap_list(layer.output):
            r += [x for x in model.layers if output_t.name in [y.name for y in wrap_list(x.input)]]
        return r

    visited = set()
    to_visit = set([cutoff_layer])
    while to_visit:
        layer = to_visit.pop()
        to_visit.update(get_next_level(layer, model))
        visited.add(layer)
    return list(visited)
I chose an iterative rather than a recursive solution, because a breadth-first traversal using sets seems like a safer approach for networks with many converging branches.
It should be used like this (InceptionV3 as an example):
import tensorflow as tf

model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet', input_shape=(299, 299, 3))
layers = get_layers_above(model.get_layer('mixed9'), model)
print([l.name for l in layers])
Output:
['batch_normalization_89',
'conv2d_93',
'activation_86',
'activation_91',
'mixed10',
'activation_88',
'batch_normalization_85',
'activation_93',
'batch_normalization_90',
'conv2d_87',
'conv2d_86',
'batch_normalization_86',
'activation_85',
'conv2d_91',
'batch_normalization_91',
'batch_normalization_87',
'activation_90',
'mixed9',
'batch_normalization_92',
'batch_normalization_88',
'activation_87',
'concatenate_1',
'activation_89',
'conv2d_88',
'conv2d_92',
'average_pooling2d_8',
'activation_92',
'mixed9_1',
'conv2d_89',
'conv2d_85',
'conv2d_90',
'batch_normalization_93']
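For the transfer-learning use case mentioned above, the returned list can then be used to unfreeze only those layers (a short sketch of my own, not part of the original snippet):

# freeze everything, then make only the layers above the cutoff trainable
names_above = {l.name for l in layers}
for l in model.layers:
    l.trainable = l.name in names_above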
In the special case where there is a layer at index middle that is connected only to the previous layer (at index middle - 1), and none of the layers after it are directly connected to layers before it, we can exploit the fact that every model is stored as a list of layers and create the two partial models this way:
model1 = keras.models.Model(inputs=model.input, outputs=model.layers[middle - 1].output)

input = keras.Input(shape=model.layers[middle - 1].output_shape[1:])

# layers is a dict in the form {name : output}
layers = {}
layers[model.layers[middle - 1].name] = input
for layer in model.layers[middle:]:
    if type(layer.input) == list:
        x = []
        for layer_input in layer.input:
            x.append(layers[layer_input.name.split('/')[0]])
    else:
        x = layers[layer.input.name.split('/')[0]]
    y = layer(x)
    layers[layer.name] = y

model2 = keras.Model(inputs=[input], outputs=[y])
It is then easy to check that model2.predict(model1.predict(x)) gives the same result as model.predict(x).
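For completeness, a minimal version of that check (my own sketch; it assumes the model has a fixed input shape and uses a tolerance of my choosing):

import numpy as np

# compare the chained sub-models against the original model on a random input
x = np.random.rand(1, *model.input_shape[1:]).astype('float32')
assert np.allclose(model.predict(x), model2.predict(model1.predict(x)), atol=1e-5)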