"accuracy" 的 Caffe 自定义 python 层

Caffe custom python layer for "accuracy"

I'm trying to write my own custom Python layer to compute the network's accuracy (to be used in the TEST phase).

My question: does it still need to have all four of these methods (setup, reshape, forward and backward)?

If so, why? I only want to use it to compute accuracy during the TEST phase, not during learning (forward and backward seem to be meant for training).

Thanks, everyone!

While I'm not sure whether Caffe will complain if you don't define all four of those methods, you definitely need Setup and Forward:

  • Setup: Pretty much what you said. For instance, in my accuracy layers I usually save some metrics for the whole test set (true and false positives/negatives, f-scores) plus the softmax probabilities of each sample, in case I want to compare different networks/methods later on. This is where I open the file I will write that information to;
  • Forward: This is where you compute the accuracy itself, comparing the predictions against the label of each sample in the batch. Typically this layer has two bottoms: the labels (the ground truth, usually provided by a data/input layer) and a layer that outputs the prediction/scores/probabilities of each sample in the batch for every class (I usually use a Softmax layer; a prototxt sketch of one follows this list);
  • Reshape and Backward: don't worry about these. You don't need a backward pass, and you don't need to reshape your blobs.
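
For reference, here is a minimal prototxt sketch of a Softmax layer that could produce the "prediction" bottom used below; the bottom name "score" is just a placeholder for whatever your last fully-connected/score layer is called:

layer {
  name: "prediction"
  type: "Softmax"
  bottom: "score"        #placeholder: the raw class scores from your last layer
  top: "prediction"
}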

Here is an example of an accuracy layer:

# Remark: This class is designed for a binary problem with classes '0' and '1'
# Save this file as accuracyLayer.py

import caffe
TRAIN = 0
TEST = 1

class Accuracy_Layer(caffe.Layer):
    #Setup method
    def setup(self, bottom, top):
        #We want two bottom blobs, the labels and the predictions
        if len(bottom) != 2:
            raise Exception("Wrong number of bottom blobs (prediction and label)") 

        #Initialize some attributes
        self.correctPredictions = 0.0
        self.totalImgs = 0

        #The top blob will hold a single scalar value (the accuracy)
        top[0].reshape(1)

    #Forward method
    def forward(self, bottom, top):
        #The order of these depends on the prototxt definition
        predictions = bottom[0].data
        labels = bottom[1].data

        self.totalImgs += len(labels)

        for i in range(len(labels)): #len(labels) is equal to the batch size
            pred = predictions[i]    #pred holds the normalized probabilities
                                     #of sample i w.r.t. the two classes
            lab = labels[i]

            if pred[0] > pred[1]:    #this means it was predicted as class 0
                if lab == 0.0:
                    self.correctPredictions += 1.0

            else:                    #else, predicted as class 1
                if lab == 1.0:
                    self.correctPredictions += 1.0

        #running accuracy over all test batches processed so far
        acc = self.correctPredictions / self.totalImgs

        #output data to top blob
        top[0].data[...] = acc

    def reshape(self, bottom, top):
        """
        We don't need to reshape or instantiate anything that is input-size sensitive
        """
        pass

    def backward(self, bottom, top):
        """
        This layer does not back propagate
        """
        pass
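
As a side note, the per-sample loop above can be written more compactly with numpy, which also generalizes it beyond two classes. A rough sketch of an equivalent forward method, assuming import numpy as np at the top of accuracyLayer.py and that the prediction bottom has shape (batch_size, num_classes):

    #Drop-in, vectorized alternative to the forward method above
    def forward(self, bottom, top):
        predictions = bottom[0].data
        labels = bottom[1].data.flatten()

        #argmax over the class axis gives the predicted class of each sample
        predictedClasses = np.argmax(predictions, axis=1)

        self.totalImgs += len(labels)
        self.correctPredictions += float(np.sum(predictedClasses == labels))

        top[0].data[...] = self.correctPredictions / self.totalImgs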

And here is how you would define it in the prototxt. This is where you tell Caffe that this layer should only run in the TEST phase:

layer {
  name: "metrics"
  type: "Python"
  top: "Acc"
  top: "FPR"
  top: "FNR"

  bottom: "prediction"   #let's suppose we have these two bottom blobs
  bottom: "label"

  python_param {
    module: "accuracyLayer"
    layer: "Accuracy_Layer"
  }
  include {
    phase: TEST    #This will ensure it will only be executed in the TEST phase
  }
}
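
To actually run this, Caffe has to be built with Python layer support (WITH_PYTHON_LAYER := 1 in Makefile.config) and accuracyLayer.py has to be importable, e.g. by putting its folder on PYTHONPATH. Below is a minimal pycaffe sketch of evaluating the accuracy; the file names net.prototxt and weights.caffemodel are just placeholders for your own network definition and trained weights:

import sys
sys.path.append('/path/to/folder/containing/accuracyLayer.py')  #so the python_param module can be imported

import caffe

caffe.set_mode_cpu()
#Build the net in TEST phase, so the accuracy layer is included
net = caffe.Net('net.prototxt', 'weights.caffemodel', caffe.TEST)

#Each forward pass processes one test batch; the layer accumulates accuracy across calls
for _ in range(100):            #100 test batches; adjust to your test set size
    out = net.forward()

print('Accuracy:', out['Acc'])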

By the way, I've written a gist with a more elaborate example of an accuracy Python layer that may be what you're looking for.