Caffe - average accuracy over the last N iterations

I am training a neural network using Caffe. In the solver.prototxt file I can set

average_loss

to print the loss averaged over the last N iterations. Can the same be done for other values as well?

For example, I wrote a custom PythonLayer that outputs accuracy, and I would like to display the accuracy averaged over the last N iterations, too.
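For reference, the two solver settings in play look like this in solver.prototxt (the values shown match the log below; they are illustrative, not a recommendation):

```protobuf
# excerpt from solver.prototxt
display: 3        # print training output every 3 iterations
average_loss: 3   # report the loss averaged over the last 3 iterations
```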

Thanks,

EDIT: here is the log. The DEBUG lines show the accuracy computed on each image. Every 3 images (since average_loss: 3 and display: 3), the accuracy is displayed along with the loss. We can see that only the last accuracy value is displayed; what I want is the average over the 3.

2018-04-24 10:38:06,383 [DEBUG]: Accuracy: 0 / 524288 = 0.000000
I0424 10:38:07.517436 99964 solver.cpp:251] Iteration 0, loss = 1.84883e+06
I0424 10:38:07.517503 99964 solver.cpp:267]     Train net output #0: accuracy = 0
I0424 10:38:07.517521 99964 solver.cpp:267]     Train net output #1: loss = 1.84883e+06 (* 1 = 1.84883e+06 loss)
I0424 10:38:07.517536 99964 sgd_solver.cpp:106] Iteration 0, lr = 2e-12
I0424 10:38:07.524904 99964 solver.cpp:287]     Time: 2.44301s/1iters
2018-04-24 10:38:08,653 [DEBUG]: Accuracy: 28569 / 524288 = 0.054491
2018-04-24 10:38:11,010 [DEBUG]: Accuracy: 22219 / 524288 = 0.042379
2018-04-24 10:38:13,326 [DEBUG]: Accuracy: 168424 / 524288 = 0.321243
I0424 10:38:14.533329 99964 solver.cpp:251] Iteration 3, loss = 1.84855e+06
I0424 10:38:14.533406 99964 solver.cpp:267]     Train net output #0: accuracy = 0.321243
I0424 10:38:14.533426 99964 solver.cpp:267]     Train net output #1: loss = 1.84833e+06 (* 1 = 1.84833e+06 loss)
I0424 10:38:14.533440 99964 sgd_solver.cpp:106] Iteration 3, lr = 2e-12
I0424 10:38:14.534195 99964 solver.cpp:287]     Time: 7.01088s/3iters
2018-04-24 10:38:15,665 [DEBUG]: Accuracy: 219089 / 524288 = 0.417879
2018-04-24 10:38:17,943 [DEBUG]: Accuracy: 202896 / 524288 = 0.386993
2018-04-24 10:38:20,210 [DEBUG]: Accuracy: 0 / 524288 = 0.000000
I0424 10:38:21.393121 99964 solver.cpp:251] Iteration 6, loss = 1.84769e+06
I0424 10:38:21.393190 99964 solver.cpp:267]     Train net output #0: accuracy = 0
I0424 10:38:21.393210 99964 solver.cpp:267]     Train net output #1: loss = 1.84816e+06 (* 1 = 1.84816e+06 loss)
I0424 10:38:21.393224 99964 sgd_solver.cpp:106] Iteration 6, lr = 2e-12
I0424 10:38:21.393940 99964 solver.cpp:287]     Time: 6.85962s/3iters
2018-04-24 10:38:22,529 [DEBUG]: Accuracy: 161180 / 524288 = 0.307426
2018-04-24 10:38:24,801 [DEBUG]: Accuracy: 178021 / 524288 = 0.339548
2018-04-24 10:38:27,090 [DEBUG]: Accuracy: 208571 / 524288 = 0.397818
I0424 10:38:28.297776 99964 solver.cpp:251] Iteration 9, loss = 1.84482e+06
I0424 10:38:28.297843 99964 solver.cpp:267]     Train net output #0: accuracy = 0.397818
I0424 10:38:28.297863 99964 solver.cpp:267]     Train net output #1: loss = 1.84361e+06 (* 1 = 1.84361e+06 loss)
I0424 10:38:28.297878 99964 sgd_solver.cpp:106] Iteration 9, lr = 2e-12
I0424 10:38:28.298607 99964 solver.cpp:287]     Time: 6.9049s/3iters
I0424 10:38:28.331749 99964 solver.cpp:506] Snapshotting to binary proto file snapshot/train_iter_10.caffemodel
I0424 10:38:36.171842 99964 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot/train_iter_10.solverstate
I0424 10:38:43.068686 99964 solver.cpp:362] Optimization Done.

1 answer


With average_loss, Caffe only averages the global network loss (the weighted sum of all loss layers) over iterations; for all other output blobs it reports only the last batch's output.

So if you want your Python layer to report the accuracy averaged over several iterations, I suggest you store a buffer as a member of your layer class and display that aggregated value.
Alternatively, you can implement a moving average on top of the accuracy computation and output that value as a "top".

You can have a "moving average output layer" implemented in Python. This layer can take any number of "bottom"s and output a moving average of those bottoms.

Layer Python code:

import caffe

class MovingAverageLayer(caffe.Layer):
  def setup(self, bottom, top):
    assert len(bottom) == len(top), "layer must have same number of inputs and outputs"
    # average over how many iterations? read from param_str
    self.buf_size = int(self.param_str)
    # allocate a buffer for each "bottom" (note: `bottom`, not `self.bottom`)
    self.buf = [[] for _ in bottom]

  def reshape(self, bottom, top):
    # make sure each output has the same shape as its input
    for i, b in enumerate(bottom):
      top[i].reshape(*b.data.shape)

  def forward(self, bottom, top):
    for i, b in enumerate(bottom):
      # push the newest value and drop the oldest once the buffer is full
      self.buf[i].append(b.data.copy())
      if len(self.buf[i]) > self.buf_size:
        self.buf[i].pop(0)
      # output the average of the buffered values
      a = 0
      for elem in self.buf[i]:
        a += elem
      top[i].data[...] = a / len(self.buf[i])

  def backward(self, top, propagate_down, bottom):
    # this layer does not back-propagate
    pass

How to use this layer in prototxt:

layer {
  name: "moving_ave"
  type: "Python"
  bottom: "accuracy"
  top: "av_accuracy"
  python_param {
    layer: "MovingAverageLayer"
    module: "path.to.module"
    param_str: "30"  # buf size 
  }
}
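The buffering logic itself can be checked without Caffe. Below is a minimal stand-alone sketch of the same sliding-window average using plain Python (the MovingAverage class is a hypothetical stand-in for illustration, not part of Caffe):

```python
class MovingAverage:
    """Sliding-window average over the last `buf_size` values,
    mirroring the buffer logic of MovingAverageLayer above."""
    def __init__(self, buf_size):
        self.buf_size = buf_size
        self.buf = []

    def update(self, value):
        # push the newest value and drop the oldest once the buffer is full
        self.buf.append(float(value))
        if len(self.buf) > self.buf_size:
            self.buf.pop(0)
        # average over whatever is currently buffered
        return sum(self.buf) / len(self.buf)

avg = MovingAverage(buf_size=3)
print(avg.update(0.00))  # only one value buffered -> 0.0
print(avg.update(0.06))  # mean of [0.00, 0.06] -> 0.03
print(avg.update(0.04))  # mean of [0.00, 0.06, 0.04]
print(avg.update(0.32))  # oldest value dropped -> mean of [0.06, 0.04, 0.32]
```

Until the buffer fills, the average is taken over however many values are present, which matches how the Caffe layer above behaves during the first buf_size iterations.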

See this tutorial for details.


Original (wrong) answer:
Caffe logs all network outputs: loss, accuracy, or any other blob that appears as a "top" of a layer and is not used as a "bottom" by any other layer.
Therefore, if you want to see the accuracy computed by a "Python" layer, just make sure no other layer uses that accuracy blob as input.
