How to disable dropout AGAIN after setting training to true in Keras?
I have the following code:
from tensorflow import keras
from tensorflow.keras import Model, layers

def create_keras_model(num_classes):
    """
    This function compiles and returns a Keras model.
    Should be passed to KerasClassifier in the Keras scikit-learn API.
    """
    input_shape = (28, 28, 1)
    x_in = keras.Input(shape=input_shape)
    x = layers.Conv2D(32, kernel_size=(3, 3), activation="relu")(x_in)
    x = layers.Dropout(0.25)(x, training=True)
    x = layers.MaxPool2D(pool_size=(2, 2))(x)
    x = layers.Conv2D(64, kernel_size=(3, 3), activation="relu")(x)
    x = layers.Dropout(0.25)(x, training=True)
    x = layers.MaxPool2D(pool_size=(2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x, training=True)
    x = layers.Dense(num_classes)(x)
    model = Model(inputs=x_in, outputs=x)
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    return model
For my purposes I need training=True. For another purpose, however, I now need the Dropout layers to behave as if training=False. Is there an easy way to achieve this?
One approach would be to save the model weights and load them into a second model that has no Dropout layers in the first place, but that seems overly convoluted.
Setting training = False on the layers like this:
model.layers[-2].training = False
model.layers[-5].training = False
model.layers[-8].training = False
does not work; calling predict multiple times on the same input data still produces different results.
IIUC, you can try omitting the Dropout layers during inference by creating a new model:
import tensorflow as tf

def create_keras_model(num_classes):
    """
    This function compiles and returns a Keras model.
    Should be passed to KerasClassifier in the Keras scikit-learn API.
    """
    input_shape = (28, 28, 1)
    x_in = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation="relu")(x_in)
    x = tf.keras.layers.Dropout(0.25)(x, training=True)
    x = tf.keras.layers.MaxPool2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Conv2D(64, kernel_size=(3, 3), activation="relu")(x)
    x = tf.keras.layers.Dropout(0.25)(x, training=True)
    x = tf.keras.layers.MaxPool2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dropout(0.5)(x, training=True)
    x = tf.keras.layers.Dense(num_classes)(x)
    model = tf.keras.Model(inputs=x_in, outputs=x)
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    return model

model = create_keras_model(1)

# Rebuild the network without the Dropout layers. The reused layers keep
# their weights, so nothing needs to be copied. The functional model's own
# InputLayer is skipped as well and replaced by a fresh one.
new_model = tf.keras.Sequential()
new_model.add(tf.keras.layers.InputLayer(input_shape=(28, 28, 1)))
for l in model.layers:
    if not l.name.startswith(('dropout', 'input')):
        new_model.add(l)
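A quick sanity check that the rebuilt model is deterministic (a minimal sketch, assuming random dummy input):

import numpy as np

# The reused layers share weights with the original model; with the
# Dropout layers gone, repeated predictions should now match exactly.
x = np.random.rand(4, 28, 28, 1).astype("float32")
p1 = new_model.predict(x)
p2 = new_model.predict(x)
print(np.allclose(p1, p2))  # expected: True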
What you could also try is to set the dropout rate to zero during inference:
# Zero the rate on every Dropout layer so that dropping becomes a no-op.
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.Dropout) and hasattr(layer, 'rate'):
        layer.rate = 0.0
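One caveat: if predict has already been traced, the old rate may be baked into the compiled graph, so it is worth verifying that the change actually took effect (a sketch with dummy data):

import numpy as np

# If this still prints False, rebuild the model (or recreate its predict
# function) so the zeroed rate is actually picked up.
x = np.random.rand(4, 28, 28, 1).astype("float32")
print(np.allclose(model.predict(x), model.predict(x)))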
Also check the docs regarding the Dropout layer:
Note that the Dropout layer only applies when training is set to True such that no values are dropped during inference. When using model.fit, training will be appropriately set to True automatically, and in other contexts, you can set the kwarg explicitly to True when calling the layer.

(This is in contrast to setting trainable=False for a Dropout layer. trainable does not affect the layer's behavior, as Dropout does not have any variables/weights that can be frozen during training.)
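The consequence is that a training value passed when the layer was called is fixed in the functional graph, which is why setting attributes afterwards (or passing training=False at call time) has no effect. A quick way to see this on the original model (a sketch with dummy input):

import numpy as np

# training=True was set when the Dropout layers were wired into the graph,
# so an explicit training=False at call time does not override it.
x = np.random.rand(1, 28, 28, 1).astype("float32")
p1 = model(x, training=False).numpy()
p2 = model(x, training=False).numpy()
print(np.allclose(p1, p2))  # expected: False, dropout is still applied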
As mentioned in the comments, you could also create two models before your loop, one with dropout and one without, and then transfer the weights with model2.set_weights(model.get_weights()).
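A minimal sketch of that two-model approach; the mc_dropout flag is a hypothetical parameter added here for illustration:

import tensorflow as tf

def create_keras_model(num_classes, mc_dropout=True):
    # When mc_dropout is False, pass None so the Dropout layers fall back
    # to the default behavior (active in fit, identity in predict).
    t = True if mc_dropout else None
    x_in = tf.keras.Input(shape=(28, 28, 1))
    x = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation="relu")(x_in)
    x = tf.keras.layers.Dropout(0.25)(x, training=t)
    x = tf.keras.layers.MaxPool2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Conv2D(64, kernel_size=(3, 3), activation="relu")(x)
    x = tf.keras.layers.Dropout(0.25)(x, training=t)
    x = tf.keras.layers.MaxPool2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dropout(0.5)(x, training=t)
    x = tf.keras.layers.Dense(num_classes)(x)
    return tf.keras.Model(inputs=x_in, outputs=x)

model = create_keras_model(10, mc_dropout=True)    # stochastic predictions
model2 = create_keras_model(10, mc_dropout=False)  # deterministic predictions
model2.set_weights(model.get_weights())            # same weights, no dropout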