Caffe - Create deploy.prototxt from train_val.prototxt
I have fine-tuned an ImageNet pre-trained model on my dataset; below are the relevant changes made in train_val.prototxt. (Also, I have not done any oversampling, only a center crop while preparing the HDF5 data.)
name: "MyCaffeNet"
layer {
  type: "HDF5Data"
  name: "data"
  top: "X"
  top: "Meta"
  top: "Labels"
  hdf5_data_param {
    source: "/path/to/hdf5_train.txt"
    batch_size: 50
  }
  include { phase: TRAIN }
}
layer {
  type: "HDF5Data"
  name: "data"
  top: "X"
  top: "Meta"
  top: "Labels"
  hdf5_data_param {
    source: "/path/to/hdf5_test.txt"
    batch_size: 50
  }
  include { phase: TEST }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "X"
  # ... the rest of conv1 and the layers up to fc7 are unchanged ...
}
layer {
  name: "concat"
  bottom: "fc7"
  bottom: "Meta"
  top: "combined"
  type: "Concat"
  concat_param {
    concat_dim: 1
  }
}
layer {
  name: "my-fc8"
  type: "InnerProduct"
  bottom: "combined"
  top: "my-fc8"
  # lr_mult is set higher than for the other layers because this layer starts
  # from random weights while the others are already trained
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4098
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "my-fc9"
  type: "InnerProduct"
  bottom: "my-fc8"
  top: "my-fc9"
  # lr_mult is set higher than for the other layers because this layer starts
  # from random weights while the others are already trained
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "my-fc9"
  bottom: "Labels"
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "my-fc9"
  bottom: "Labels"
  top: "loss"
  include {
    phase: TEST
  }
}
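For reference, the HDF5Data layers above look up datasets named after their top blobs (X, Meta, Labels) in the .h5 files listed in hdf5_train.txt / hdf5_test.txt. A minimal MATLAB sketch of writing one such chunk (the file name, variable names, and sizes are assumptions, not taken from the question):

% Hypothetical sketch: write one HDF5 chunk that the HDF5Data layers can read.
% Dataset names must match the top blobs. MATLAB stores arrays column-major and
% h5write reverses the dimension order, so:
%   images : 227 x 227 x 3 x n single (width x height x channel x num, already
%            BGR and mean-subtracted)   -> Caffe sees n x 3 x 227 x 227
%   meta   : 2 x n single               -> Caffe sees n x 2
%   labels : 1 x n single               -> Caffe sees n x 1
fname = 'hdf5_train_chunk1.h5';
h5create(fname, '/X',      [227 227 3 n], 'Datatype', 'single');
h5create(fname, '/Meta',   [2 n],         'Datatype', 'single');
h5create(fname, '/Labels', [1 n],         'Datatype', 'single');
h5write(fname, '/X',      single(images));
h5write(fname, '/Meta',   single(meta));
h5write(fname, '/Labels', single(labels));
% hdf5_train.txt then just lists the path to this .h5 file, one file per line.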
The problem is that I don't know how to modify my deploy.prototxt and my feature-extraction code (shown below) to test this model, now that the extra meta information is used as an input feature.
for i = 1:n
  im = fgetl(file_list);                   % next image path from the list file
  im = imread(im);
  input_data = {prepare_image(im)};        % preprocess into a 4-D blob
  scores = caffe('forward', input_data);   % run the net
  scores_original = scores;
  scores = scores{1};                      % first (only) output blob
  scores = squeeze(scores);
end
Here prepare_image converts RGB to BGR, permutes the dimensions, and center-crops, i.e. basically the preprocessing.
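For context, this is roughly what such a prepare_image looks like for the old matcaffe interface with a single center crop (the 256/227 sizes and the ilsvrc_2012_mean.mat mean file are assumptions based on the CaffeNet defaults, not my exact code):

function crop = prepare_image(im)
  % Sketch of the preprocessing described above (assumed CaffeNet defaults).
  IMAGE_DIM = 256;
  CROPPED_DIM = 227;
  d = load('ilsvrc_2012_mean.mat');   % assumed mean file shipped with matcaffe
  mean_data = d.mean_data;            % 256 x 256 x 3, width x height x channel, BGR

  im = single(imresize(im, [IMAGE_DIM IMAGE_DIM], 'bilinear'));
  im = im(:, :, [3 2 1]);             % RGB -> BGR
  im = permute(im, [2 1 3]);          % H x W x C -> W x H x C (Caffe layout)
  im = im - mean_data;                % subtract the dataset mean

  off = floor((IMAGE_DIM - CROPPED_DIM) / 2) + 1;
  crop = im(off:off+CROPPED_DIM-1, off:off+CROPPED_DIM-1, :);   % 227 x 227 x 3 center crop
end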
So it boils down to: how should I modify caffe('forward', input_data), and write a deploy.prototxt, so that the Meta (n*2) features are also supplied when testing with Caffe? Thanks for your patience and help!
I think the deploy.prototxt should begin like this:
name: "MyCaffeNet"
input: "X"
input_dim: 1
input_dim: 3
input_dim: 227
input_dim: 227
input: "Meta"
input_dim: 1
input_dim: 2 # the two extra 'features' per image go on the channel axis, matching concat_dim: 1
input_dim: 1
input_dim: 1
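After these input declarations, the rest of the deploy file is the same layer stack as in train_val.prototxt with the HDF5Data and EuclideanLoss layers removed (the param and filler blocks can also be dropped, since no training happens at deploy time). A sketch of the tail, assuming conv1 through fc7 are copied unchanged from the standard CaffeNet deploy file:

layer {
  name: "concat"
  type: "Concat"
  bottom: "fc7"
  bottom: "Meta"
  top: "combined"
  concat_param { concat_dim: 1 }
}
layer {
  name: "my-fc8"
  type: "InnerProduct"
  bottom: "combined"
  top: "my-fc8"
  inner_product_param { num_output: 4098 }
}
layer {
  name: "my-fc9"
  type: "InnerProduct"
  bottom: "my-fc8"
  top: "my-fc9"
  inner_product_param { num_output: 1 }
}
# no loss layers here; my-fc9 is the output blob returned by caffe('forward', ...)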
The variable input_data can be a vector of 4-D blobs. The function prepare_data gives a 4-D blob of image data; in the same way, the metadata (or any other kind of input) can be reshaped into a suitable 4-D blob and passed to the forward function.
input_data = {prepare_data(im), prepare_meta()};
scores = caffe('forward', input_data);
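A hypothetical prepare_meta along those lines (here it takes the two meta values for the current image as an explicit 1 x 2 argument, which is an assumption) just packs them into a 4-D single blob matching the 1 x 2 x 1 x 1 Meta input; matcaffe passes blobs in width x height x channel x num order:

function blob = prepare_meta(meta_row)
  % Hypothetical helper: two meta values of one image -> blob matching the
  % deploy declaration (N=1, C=2, H=1, W=1), i.e. 1 x 1 x 2 x 1 in MATLAB's
  % width x height x channel x num layout; matcaffe needs single precision.
  blob = reshape(single(meta_row), [1 1 2 1]);
end

The cell elements passed to caffe('forward', ...) follow the order in which the inputs are declared in deploy.prototxt, so the image blob goes first and the Meta blob second.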