
Caffe: Check failed: outer_num_ * inner_num_ == bottom[1]->count() (10 vs. 60) Number of labels must match number of predictions

I am trying to fine-tune AlexNet for a multi-label regression task. To do this, I replaced the last layer, which produced 1000 label outputs (for the original image-classification task), with one that produces 6 outputs, giving me 6 floating-point values per image. I replaced the last layer as mentioned here.
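For reference, a replaced final layer of this kind would look roughly like the prototxt sketch below; the layer and blob names (fc8_retrain, fc8, fc7) are assumptions inferred from the log further down, not necessarily the exact definition I used:

layer {
  name: "fc8_retrain"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    # num_output reduced from 1000 (ImageNet classes) to 6 regression targets
    num_output: 6
  }
}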

My training data is prepared in HDF5 format, with data of shape (11000, 3, 544, 1024) and labels of shape (11000, 1, 6). When retraining the AlexNet weights with the Caffe library, I get the following error:

I1013 10:50:49.759560  3107 net.cpp:139] Memory required for data: 950676640
I1013 10:50:49.759562  3107 layer_factory.hpp:77] Creating layer accuracy_retrain
I1013 10:50:49.759567  3107 net.cpp:86] Creating Layer accuracy_retrain
I1013 10:50:49.759568  3107 net.cpp:408] accuracy_retrain <- fc8_fc8_retrain_0_split_0
I1013 10:50:49.759572  3107 net.cpp:408] accuracy_retrain <- label_data_1_split_0
I1013 10:50:49.759575  3107 net.cpp:382] accuracy_retrain -> accuracy
F1013 10:50:49.759587  3107 accuracy_layer.cpp:31] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (10 vs. 60) Number of labels must match number of predictions; e.g., if label axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}.

My batch size for both the training and testing phases is 10. The error occurs during the testing phase, probably in the accuracy layer (complete error log here). I am not sure why this problem appears; perhaps my label format is incorrect. Any help in this regard would be greatly appreciated.
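For reference, an HDF5Data layer for data prepared this way would look roughly like the following sketch; the source file name and the layer name are assumptions, only the "data"/"label" tops and the batch size of 10 come from my setup:

layer {
  name: "data"
  type: "HDF5Data"
  # "data" and "label" are the dataset names inside the .h5 files
  top: "data"
  top: "label"
  hdf5_data_param {
    # text file listing the paths of the .h5 files (assumed name)
    source: "train_h5_list.txt"
    batch_size: 10
  }
  include { phase: TRAIN }
}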

I solved this problem. It seems the Accuracy layer is only meant to be used together with a SoftmaxWithLoss layer for classification tasks: it expects a single integer class label per image (10 labels for a batch of 10), whereas my HDF5 labels provide 6 values per image (60 in total), hence the 10 vs. 60 mismatch. As described, EuclideanLoss can be used for testing a regression network.
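In other words, dropping the Accuracy layer and scoring the network with a Euclidean loss roughly like the sketch below resolves the check failure; the bottom blob names here are assumptions based on the log above:

layer {
  name: "loss"
  type: "EuclideanLoss"
  # predicted 6-vector from the retrained fc8 and the 6-value regression label
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}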