VGG16 Transfer Learning varying output
I'm seeing strange behaviour when doing transfer learning with VGG16:
from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Model

# Load VGG16 with its ImageNet classifier head, then drop the last two layers
model = VGG16(weights='imagenet', include_top=True)
model.layers.pop()
model.layers.pop()

# Freeze the pretrained layers
for layer in model.layers:
    layer.trainable = False

# Attach a new 2-class softmax head on top of the remaining layers
new_layer = Dense(2, activation='softmax')
inp = model.input
out = new_layer(model.layers[-1].output)
model = Model(inp, out)
However, when I call model.predict(image), the classification varies: sometimes it assigns the image to Class 1, and the next time it assigns the same image to Class 2.
That's because you haven't set a seed. The new Dense layer is initialized with fresh random weights every time the script runs, so its (untrained) predictions differ from run to run. Try this:
import numpy as np
import keras
from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Model

seed_value = 0
np.random.seed(seed_value)

# Load VGG16 and drop the last two layers as before
model = VGG16(weights='imagenet', include_top=True)
model.layers.pop()
model.layers.pop()

for layer in model.layers:
    layer.trainable = False

# Seed the initializers of the new head so its weights are reproducible across runs
new_layer = Dense(2, activation='softmax',
                  kernel_initializer=keras.initializers.glorot_normal(seed=seed_value),
                  bias_initializer=keras.initializers.Zeros())
inp = model.input
out = new_layer(model.layers[-1].output)
model = Model(inp, out)
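With the seed fixed, rebuilding the model and predicting on the same preprocessed image should give the same class on every run. A minimal sketch of how one might check this, assuming the model built above; the image path 'cat.jpg' is a placeholder and the standard VGG16 preprocessing helpers are used:

import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

# 'cat.jpg' is a hypothetical path; VGG16 expects 224x224 RGB input
img = image.load_img('cat.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)   # add batch dimension -> (1, 224, 224, 3)
x = preprocess_input(x)

preds = model.predict(x)          # shape (1, 2)
print(np.argmax(preds, axis=1))   # same class every run once the seed is fixed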
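Note that with or without a seed the new Dense head is still untrained, so its predictions are essentially arbitrary (just reproducibly arbitrary) until you fit it on your two-class data. A rough sketch of that step, using dummy random arrays purely for illustration:

import numpy as np
import keras

# Dummy stand-ins for a real two-class dataset (illustration only)
X_train = np.random.rand(8, 224, 224, 3)
y_train = keras.utils.to_categorical(np.random.randint(0, 2, size=8), num_classes=2)

# Only the new Dense head is trainable; the frozen VGG16 base stays fixed
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=4, epochs=1)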