Remove top layer from pre-trained model, transfer learning, tensorflow (load_model)
I have pre-trained a model with two classes (a model I saved myself), and I want to use it for transfer learning to train a model with six classes.
I have loaded the pre-trained model into my new training script:
base_model = tf.keras.models.load_model("base_model_path")
How can I remove the top/head layer (a conv1D layer)?
I saw that in keras you can use base_model.pop(), and that for tf.keras.applications you can simply pass include_top=False,
but is there something similar when using tf.keras and load_model?
(I have tried something like this:
for layer in base_model.layers[:-1]:
    layer.trainable = False
and then adding it to a new model(?), but I'm not sure how to proceed.)
Thanks for your help!
You could try something like this:
The base model consists of a simple Conv1D network with an output layer for two classes:
import tensorflow as tf
samples = 100
timesteps = 5
features = 2
classes = 2
dummy_x, dummy_y = tf.random.normal((samples, timesteps, features)), tf.random.uniform((samples, 1), maxval=classes, dtype=tf.int32)
base_model = tf.keras.Sequential()
base_model.add(tf.keras.layers.Conv1D(32, 3, activation='relu', input_shape=(timesteps, features)))
base_model.add(tf.keras.layers.GlobalMaxPool1D())
base_model.add(tf.keras.layers.Dense(32, activation='relu'))
base_model.add(tf.keras.layers.Dense(classes, activation='softmax'))
base_model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy())
print(base_model.summary())
base_model.fit(dummy_x, dummy_y, batch_size=16, epochs=1)
base_model.save("base_model")
base_model = tf.keras.models.load_model("base_model")
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_31 (Conv1D) (None, 3, 32) 224
global_max_pooling1d_13 (Gl (None, 32) 0
obalMaxPooling1D)
dense_17 (Dense) (None, 32) 1056
dense_18 (Dense) (None, 2) 66
=================================================================
Total params: 1,346
Trainable params: 1,346
Non-trainable params: 0
_________________________________________________________________
None
7/7 [==============================] - 0s 3ms/step - loss: 0.6973
INFO:tensorflow:Assets written to: base_model/assets
The new model also consists of a simple Conv1D network, but with an output layer for six classes. It also contains all the layers of base_model except the first Conv1D layer and the last output layer:
classes = 6
dummy_x, dummy_y = tf.random.normal((samples, timesteps, features)), tf.random.uniform((samples, 1), maxval=classes, dtype=tf.int32)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(64, 3, activation='relu', input_shape=(timesteps, features)))
model.add(tf.keras.layers.Conv1D(32, 2, activation='relu'))
for layer in base_model.layers[1:-1]:  # Skip first and last layer
    model.add(layer)
model.add(tf.keras.layers.Dense(classes, activation='softmax'))
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy())
print(model.summary())
model.fit(dummy_x, dummy_y, batch_size=16, epochs=1)
Model: "sequential_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_32 (Conv1D) (None, 3, 64) 448
conv1d_33 (Conv1D) (None, 2, 32) 4128
global_max_pooling1d_13 (Gl (None, 32) 0
obalMaxPooling1D)
dense_17 (Dense) (None, 32) 1056
dense_19 (Dense) (None, 6) 198
=================================================================
Total params: 5,830
Trainable params: 5,830
Non-trainable params: 0
_________________________________________________________________
None
7/7 [==============================] - 0s 3ms/step - loss: 1.8069
<keras.callbacks.History at 0x7f90c87a3c50>
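Since the question also asked about freezing, here is a minimal self-contained sketch of the same pattern with the transferred layers frozen, assuming you want the copied middle layers kept fixed while only the new Conv1D and output layers train. The base model is rebuilt inline here as a stand-in for the one you would get from load_model:

```python
import tensorflow as tf

# Stand-in for the loaded model; in practice this would be
# base_model = tf.keras.models.load_model("base_model")
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation='relu', input_shape=(5, 2)),
    tf.keras.layers.GlobalMaxPool1D(),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(64, 3, activation='relu', input_shape=(5, 2)))
model.add(tf.keras.layers.Conv1D(32, 2, activation='relu'))
for layer in base_model.layers[1:-1]:  # skip first and last layer
    layer.trainable = False            # freeze the transferred layers
    model.add(layer)
model.add(tf.keras.layers.Dense(6, activation='softmax'))
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy())
```

With this, model.summary() reports the parameters of the copied Dense layer under "Non-trainable params", and only the new layers are updated during fit.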