How to use augmented data when using transfer learning?
I am using VGG16 for transfer learning, but the accuracy is very low. Can data augmentation be used together with transfer learning to improve accuracy?
Here is the code for reference:
# Required imports (standalone Keras)
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.layers import Dense
from keras.optimizers import Adam

# Image directory paths (relative)
train_path = 'myNetDB/train'
valid_path = 'myNetDB/valid'
test_path = 'myNetDB/test'

train_batches = ImageDataGenerator().flow_from_directory(train_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=10)
valid_batches = ImageDataGenerator().flow_from_directory(valid_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=4)
test_batches = ImageDataGenerator().flow_from_directory(test_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=10)

# Load the pre-trained VGG16 model and copy its layers (except the final
# 1000-class softmax) into a new Sequential model
vgg16_model = load_model('Fetched_VGG.h5')
model = Sequential()
for layer in vgg16_model.layers[:-1]:
    model.add(layer)

# Freeze the copied layers so their weights are not updated during training
for layer in model.layers:
    layer.trainable = False

# Add a new output layer for the two classes
model.add(Dense(2, activation='softmax'))

model.compile(Adam(lr=.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, steps_per_epoch=4,
                    validation_data=valid_batches, validation_steps=4, epochs=5, verbose=2)
predictions = model.predict_generator(test_batches, steps=1, verbose=0)
If your accuracy is low, it is probably because your dataset is very different from the one VGG16 was trained on. There are two possibilities:
1. Your dataset is large enough: you can start from the pre-trained weights and keep training the model on your own data.
2. Your dataset is small. In that case there is no shortcut: you should consider a simpler model than VGG16 so that you are less likely to overfit.
In both cases, to answer your question: yes, applying augmentation techniques deliberately can help improve accuracy.
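For example, here is a minimal sketch of how augmentation could be plugged into the training generator of the code above. The specific transformations and their ranges (rotation_range=20, 10% shifts, horizontal flips, etc.) are illustrative assumptions, not tuned values; augmentation is applied only to the training data so that validation and test results still reflect unmodified images.

from keras.preprocessing.image import ImageDataGenerator

# Augmented generator for the training set only (parameter values are
# illustrative assumptions and should be tuned for the actual data)
train_datagen = ImageDataGenerator(
    rotation_range=20,       # random rotations up to 20 degrees
    width_shift_range=0.1,   # random horizontal shifts (fraction of width)
    height_shift_range=0.1,  # random vertical shifts (fraction of height)
    zoom_range=0.1,          # random zoom in/out
    horizontal_flip=True)    # random left/right mirroring

train_batches = train_datagen.flow_from_directory(
    train_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=10)

# Validation and test generators stay un-augmented, exactly as before
valid_batches = ImageDataGenerator().flow_from_directory(
    valid_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=4)
test_batches = ImageDataGenerator().flow_from_directory(
    test_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=10)

The rest of the training code (model definition, compile, fit_generator) stays unchanged, since the augmented images are generated on the fly for each batch. Also note that with steps_per_epoch=4 and batch_size=10 only about 40 images are seen per epoch; raising steps_per_epoch so each epoch covers the whole training set usually helps as well.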