How to set up a base model in inference mode?

The Keras documentation on fine-tuning states that it is important to "keep the BatchNormalization layers in inference mode by passing training=False when calling the base model." (Interestingly, every unofficial example I have found on the topic ignores this setting.)

The documentation follows this up with an example:

from tensorflow import keras

base_model = keras.applications.Xception(
    weights='imagenet',  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False)  # Do not include the ImageNet classifier at the top.
base_model.trainable = False
inputs = keras.Input(shape=(150, 150, 3))
scale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)
x = scale_layer(inputs)

# We make sure that the base_model is running in inference mode here,
# by passing `training=False`. This is important for fine-tuning, as you will
# learn in a few paragraphs.
x = base_model(x, training=False)

x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

The problem is that this example adds preprocessing in front of the base model, whereas my model (EfficientNetB3) already includes preprocessing, and I do not know how to set my base_model to training=False without prepending additional layers to it:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.applications.efficientnet import EfficientNetB3

base_model = EfficientNetB3(weights='imagenet', include_top=False, input_shape=input_shape)
base_model.trainable = False
model = Sequential()
model.add(base_model)  # How to set base_model training=False?
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.2))
model.add(Dense(10, activation="softmax", name="classifier"))
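
For comparison, the functional API would presumably let me pass the flag directly when calling the base model, along the lines of the documentation example above (a sketch only, with the Rescaling layer dropped because EfficientNet already preprocesses its input); I would still prefer to keep the Sequential structure:

from tensorflow import keras

inputs = keras.Input(shape=input_shape)
x = base_model(inputs, training=False)  # base model forced into inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)
outputs = keras.layers.Dense(10, activation="softmax", name="classifier")(x)
functional_model = keras.Model(inputs, outputs)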

How to prove that training=False vs. training=True has an effect:

@Frightera explained to me how the model's state can be "locked", and I wanted to prove to myself that the locking happens by inspecting the non-trainable variables of the BatchNormalization layers. My understanding is that if I call the model with training=True, it should update those variables. However, that is not the case; or am I missing something?

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications.efficientnet import EfficientNetB3
import numpy as np


class WrappedEffNet(keras.layers.Layer):
    
    def __init__(self, **kwargs):
        super(WrappedEffNet, self).__init__(**kwargs)
        self.model = EfficientNetB3(weights='imagenet',
                                    include_top=False,
                                    input_shape=(224, 224, 3))
        self.model.trainable = False
    
    def call(self, x, training=False):
        return self.model(x, training=training)  # Modified so that training=True can also be passed through.
    

base_model_wrapped = WrappedEffNet()

random_vector = tf.random.uniform((1, 224, 224, 3))

o1 = base_model_wrapped(random_vector)

o2 = base_model_wrapped(random_vector, training = False)

# Collect the non-trainable variables (moving statistics) of every
# BatchNormalization layer.
array_a = np.array([])
for layer in base_model_wrapped.model.layers:
    if hasattr(layer, 'moving_mean'):
        array_a = np.concatenate([array_a, layer.moving_mean.numpy()])
        array_a = np.concatenate([array_a, layer.moving_variance.numpy()])

o3 = base_model_wrapped(random_vector, training = True) # Changing to True, shouldn't this update BatchNormalization non-trainable variables?
array_b = np.array([])
for layer in base_model_wrapped.model.layers:
    if hasattr(layer, 'moving_mean'):
        array_b = np.concatenate([array_b, layer.moving_mean.numpy()])
        array_b = np.concatenate([array_b, layer.moving_variance.numpy()])

print(np.allclose(array_a, array_b)) # Shouldn't this be False?

You cannot pass arguments to the base model's call method in a Sequential model the way you can in the functional API. However, you can wrap the model as a custom layer:

class WrappedEffNet(tf.keras.layers.Layer):
    
    def __init__(self, **kwargs):
        super(WrappedEffNet, self).__init__(**kwargs)
        self.model = keras.applications.EfficientNetB3(weights='imagenet', 
                                                       include_top=False,
                                                       input_shape=(224, 224, 3))
        self.model.trainable = False

    def call(self, x, training):
        # Ignore the incoming flag and force inference mode.
        return self.model(x, training=False)

Sanity check:

base_model_wrapped = WrappedEffNet()

random_vector = tf.random.uniform((1, 224, 224, 3))

o1 = base_model_wrapped(random_vector)
o2 = base_model_wrapped(random_vector, training = False)
o3 = base_model_wrapped(random_vector, training = True)

np.allclose(o1, o2), np.allclose(o1, o3), np.allclose(o2, o3)
# (True, True, True)

It stays in inference mode regardless of the value of training.
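
The wrapped layer drops straight into a Sequential model like the one in the question (a sketch; the head layers mirror the question, and the batch shape matches the summary below):

from tensorflow import keras

model = keras.Sequential([
    base_model_wrapped,  # base model locked in inference mode by the wrapper
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation="softmax", name="classifier"),
])
model.build((1, 224, 224, 3))  # build with an explicit input shape so summary() can report shapes
model.summary()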

The model summary is the same as for the Sequential model:

 Layer (type)                Output Shape              Param #   
=================================================================
 wrapped_eff_net (WrappedEff  (1, 7, 7, 1536)          10783535  
 Net)                                                            
                                                                 
 global_average_pooling2d (G  (1, 1536)                0         
 lobalAveragePooling2D)                                          
                                                                 
 dropout (Dropout)           (1, 1536)                 0         
                                                                 
 classifier (Dense)          (1, 10)                   15370     
                                                                 
=================================================================
Total params: 10,798,905
Trainable params: 15,370
Non-trainable params: 10,783,535
_________________________________________________________________

Edit: To see the difference in BatchNormalization behavior:

import tensorflow as tf
import numpy as np

x = np.random.randn(1, 2) * 20 + 0.1

bn = tf.keras.layers.BatchNormalization()
input_layer = tf.keras.layers.Input((x.shape[-1],))
output = bn(input_layer)

model = tf.keras.Model(inputs=input_layer, outputs=output)

model.trainable = False:

model.trainable = False
for i in range(2):
    print('Input:', x)
    print('Moving mean:', model.layers[1].moving_mean.numpy())
    print('training = True -->', model(x, training = True).numpy())
    print('training = False -->', model(x, training = False).numpy())
    print()

Input: [[ 2.50317905 12.44406219]]
Moving mean: [0. 0.]
training = True --> [[ 2.5019286 12.437845 ]]
training = False --> [[ 2.5019286 12.437845 ]]

Input: [[ 2.50317905 12.44406219]]
Moving mean: [0. 0.]
training = True --> [[ 2.5019286 12.437845 ]]
training = False --> [[ 2.5019286 12.437845 ]]

model.trainable = True, training = True:

model.trainable = True
for i in range(2):
    print('Input:', x)
    print('Moving mean:', model.layers[1].moving_mean.numpy())
    print('training = True -->', model(x, training = True).numpy())
    print()

Input: [[ 2.50317905 12.44406219]]
Moving mean: [0. 0.]
training = True --> [[0. 0.]]

Input: [[ 2.50317905 12.44406219]]
Moving mean: [0.02503179 0.12444062]
training = True --> [[0. 0.]]

model.trainable = True, training = False:

model.trainable = True
for i in range(2):
    print('Input:', x)
    print('Moving mean:', model.layers[1].moving_mean.numpy())
    print('training = False -->', model(x, training = False).numpy())
    print()

Input: [[ 2.50317905 12.44406219]]
Moving mean: [0.04981326 0.24763682]
training = False --> [[ 2.476884 12.313342]]

Input: [[ 2.50317905 12.44406219]]
Moving mean: [0.04981326 0.24763682]
training = False --> [[ 2.476884 12.313342]]
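
Tying this back to the question: the wrapper there sets self.model.trainable = False, and that is what keeps the BatchNormalization statistics frozen, so array_a and array_b match even when calling with training=True. A minimal sketch reusing the toy model above (a hypothetical variant, not from the original post: leaving the model trainable makes the statistics move):

model.trainable = True  # hypothetical: leave BatchNormalization unfrozen

before = model.layers[1].moving_mean.numpy()
model(x, training=True)  # a training-mode pass updates the moving statistics
after = model.layers[1].moving_mean.numpy()
print(np.allclose(before, after))  # False: the moving mean changed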