Caffe - Average accuracy over the last N iterations

I am training a neural network with Caffe. In the solver.prototxt file, I can set average_loss to print the loss averaged over the last N iterations. Is it possible to do the same with other values as well?
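For context, the relevant solver settings might look like this (a sketch; the values match the window of 3 used in the log below):

```
# solver.prototxt (excerpt) -- illustrative values
display: 3        # print training outputs every 3 iterations
average_loss: 3   # report the loss averaged over the last 3 iterations
```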

For example, I wrote a custom PythonLayer that outputs accuracy, and I would also like to display the average accuracy over the last N iterations.

Thanks,

Edit: here is the log. The DEBUG lines show the accuracy computed for each image, and the accuracy and loss are displayed every 3 images (average_loss: 3 and display: 3). As you can see, only the last value is shown, whereas what I want is the average over the 3.

2018-04-24 10:38:06,383 [DEBUG]: Accuracy: 0 / 524288 = 0.000000
I0424 10:38:07.517436 99964 solver.cpp:251] Iteration 0, loss = 1.84883e+06
I0424 10:38:07.517503 99964 solver.cpp:267]     Train net output #0: accuracy = 0
I0424 10:38:07.517521 99964 solver.cpp:267]     Train net output #1: loss = 1.84883e+06 (* 1 = 1.84883e+06 loss)
I0424 10:38:07.517536 99964 sgd_solver.cpp:106] Iteration 0, lr = 2e-12
I0424 10:38:07.524904 99964 solver.cpp:287]     Time: 2.44301s/1iters
2018-04-24 10:38:08,653 [DEBUG]: Accuracy: 28569 / 524288 = 0.054491
2018-04-24 10:38:11,010 [DEBUG]: Accuracy: 22219 / 524288 = 0.042379
2018-04-24 10:38:13,326 [DEBUG]: Accuracy: 168424 / 524288 = 0.321243
I0424 10:38:14.533329 99964 solver.cpp:251] Iteration 3, loss = 1.84855e+06
I0424 10:38:14.533406 99964 solver.cpp:267]     Train net output #0: accuracy = 0.321243
I0424 10:38:14.533426 99964 solver.cpp:267]     Train net output #1: loss = 1.84833e+06 (* 1 = 1.84833e+06 loss)
I0424 10:38:14.533440 99964 sgd_solver.cpp:106] Iteration 3, lr = 2e-12
I0424 10:38:14.534195 99964 solver.cpp:287]     Time: 7.01088s/3iters
2018-04-24 10:38:15,665 [DEBUG]: Accuracy: 219089 / 524288 = 0.417879
2018-04-24 10:38:17,943 [DEBUG]: Accuracy: 202896 / 524288 = 0.386993
2018-04-24 10:38:20,210 [DEBUG]: Accuracy: 0 / 524288 = 0.000000
I0424 10:38:21.393121 99964 solver.cpp:251] Iteration 6, loss = 1.84769e+06
I0424 10:38:21.393190 99964 solver.cpp:267]     Train net output #0: accuracy = 0
I0424 10:38:21.393210 99964 solver.cpp:267]     Train net output #1: loss = 1.84816e+06 (* 1 = 1.84816e+06 loss)
I0424 10:38:21.393224 99964 sgd_solver.cpp:106] Iteration 6, lr = 2e-12
I0424 10:38:21.393940 99964 solver.cpp:287]     Time: 6.85962s/3iters
2018-04-24 10:38:22,529 [DEBUG]: Accuracy: 161180 / 524288 = 0.307426
2018-04-24 10:38:24,801 [DEBUG]: Accuracy: 178021 / 524288 = 0.339548
2018-04-24 10:38:27,090 [DEBUG]: Accuracy: 208571 / 524288 = 0.397818
I0424 10:38:28.297776 99964 solver.cpp:251] Iteration 9, loss = 1.84482e+06
I0424 10:38:28.297843 99964 solver.cpp:267]     Train net output #0: accuracy = 0.397818
I0424 10:38:28.297863 99964 solver.cpp:267]     Train net output #1: loss = 1.84361e+06 (* 1 = 1.84361e+06 loss)
I0424 10:38:28.297878 99964 sgd_solver.cpp:106] Iteration 9, lr = 2e-12
I0424 10:38:28.298607 99964 solver.cpp:287]     Time: 6.9049s/3iters
I0424 10:38:28.331749 99964 solver.cpp:506] Snapshotting to binary proto file snapshot/train_iter_10.caffemodel
I0424 10:38:36.171842 99964 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot/train_iter_10.solverstate
I0424 10:38:43.068686 99964 solver.cpp:362] Optimization Done.

Caffe only averages the global loss of the net (the weighted sum of all loss layers) over average_loss iterations; for every other output blob it reports only the output of the last batch.

Therefore, if you want your Python layer to report the accuracy averaged over several iterations, I suggest you store a buffer as a member of your layer class and display this aggregated value.
Alternatively, you can implement a "moving average" on top of the accuracy computation and output this value as a "top".

You can implement a "moving average output layer" in Python. This layer can take any number of "bottoms" and output the moving average of those bottoms.

The Python layer code:

import caffe
class MovingAverageLayer(caffe.Layer):
  def setup(self, bottom, top):
    assert len(bottom) == len(top), "layer must have same number of inputs and outputs"
    # average over how many iterations? read from param_str
    self.buf_size = int(self.param_str)
    # allocate a buffer for each "bottom"
    self.buf = [[] for _ in bottom]  # note: use the bottom argument, not self.bottom

  def reshape(self, bottom, top):
    # make sure inputs and outputs have the same size
    for i, b in enumerate(bottom):
      top[i].reshape(*b.shape)

  def forward(self, bottom, top):
    # put into buffers
    for i, b in enumerate(bottom):
      self.buf[i].append(b.data.copy())
      if len(self.buf[i]) > self.buf_size:
        self.buf[i].pop(0)
      # compute average
      a = 0
      for elem in self.buf[i]:
        a += elem
      top[i].data[...] = a / len(self.buf[i])

  def backward(self, top, propagate_down, bottom):
    # this layer does not back prop
    pass

How to use this layer in a prototxt:

layer {
  name: "moving_ave"
  type: "Python"
  bottom: "accuracy"
  top: "av_accuracy"
  python_param {
    layer: "MovingAverageLayer"
    module: "path.to.module"
    param_str: "30"  # buf size 
  }
}
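The buffering logic inside forward can be checked standalone, without Caffe. The sketch below uses collections.deque (with maxlen) in place of the manual list-and-pop bookkeeping; the class name and values are illustrative only, the accuracies are taken from the log above:

```python
from collections import deque

class MovingAverage:
    """Standalone sketch of the buffering done in MovingAverageLayer.forward."""
    def __init__(self, buf_size):
        # deque with maxlen drops the oldest entry automatically
        self.buf = deque(maxlen=buf_size)

    def update(self, value):
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)

# With a window of 3, the third update averages the last three accuracies:
ma = MovingAverage(3)
for acc in [0.054491, 0.042379, 0.321243]:
    smoothed = ma.update(acc)
print(round(smoothed, 6))  # prints 0.139371
```

With average_loss: 3, the solver would print this 0.139371 instead of the raw 0.321243 seen at iteration 3 in the log.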

Original incorrect answer:
Caffe logs any net output: loss, accuracy, or any other blob that appears as a "top" of a layer and is not used as a "bottom" of any other layer.
Therefore, if you want to see the accuracy computed by your "Python" layer, simply make sure no other layer uses this accuracy as an input.