When to stop training in caffe?
I am training with bvlc_reference_caffenet, running both training and testing. Here is a sample log from my training run:
I0430 11:49:08.408740 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:21.221074 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:34.038710 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:46.816813 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:56.630870 23334 solver.cpp:397] Test net output #0: accuracy = 0.932502
I0430 11:49:56.630940 23334 solver.cpp:397] Test net output #1: loss = 0.388662 (* 1 = 0.388662 loss)
I0430 11:49:57.218236 23334 solver.cpp:218] Iteration 71000 (0.319361 iter/s, 62.625s/20 iters), loss = 0.00146191
I0430 11:49:57.218300 23334 solver.cpp:237] Train net output #0: loss = 0.00146191 (* 1 = 0.00146191 loss)
I0430 11:49:57.218308 23334 sgd_solver.cpp:105] Iteration 71000, lr = 0.001
I0430 11:50:09.168726 23334 solver.cpp:218] Iteration 71020 (1.67357 iter/s, 11.9505s/20 iters), loss = 0.000806865
I0430 11:50:09.168778 23334 solver.cpp:237] Train net output #0: loss = 0.000806868 (* 1 = 0.000806868 loss)
I0430 11:50:09.168787 23334 sgd_solver.cpp:105] Iteration 71020, lr = 0.001
I0430 11:50:21.127496 23334 solver.cpp:218] Iteration 71040 (1.67241 iter/s, 11.9588s/20 iters), loss = 0.000182312
I0430 11:50:21.127539 23334 solver.cpp:237] Train net output #0: loss = 0.000182314 (* 1 = 0.000182314 loss)
I0430 11:50:21.127562 23334 sgd_solver.cpp:105] Iteration 71040, lr = 0.001
I0430 11:50:33.248086 23334 solver.cpp:218] Iteration 71060 (1.65009 iter/s, 12.1206s/20 iters), loss = 0.000428604
I0430 11:50:33.248260 23334 solver.cpp:237] Train net output #0: loss = 0.000428607 (* 1 = 0.000428607 loss)
I0430 11:50:33.248272 23334 sgd_solver.cpp:105] Iteration 71060, lr = 0.001
I0430 11:50:45.518955 23334 solver.cpp:218] Iteration 71080 (1.62989 iter/s, 12.2707s/20 iters), loss = 0.00108446
I0430 11:50:45.519006 23334 solver.cpp:237] Train net output #0: loss = 0.00108447 (* 1 = 0.00108447 loss)
I0430 11:50:45.519011 23334 sgd_solver.cpp:105] Iteration 71080, lr = 0.001
I0430 11:50:51.287315 23341 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:50:57.851781 23334 solver.cpp:218] Iteration 71100 (1.62169 iter/s, 12.3328s/20 iters), loss = 0.00150949
I0430 11:50:57.851828 23334 solver.cpp:237] Train net output #0: loss = 0.0015095 (* 1 = 0.0015095 loss)
I0430 11:50:57.851837 23334 sgd_solver.cpp:105] Iteration 71100, lr = 0.001
I0430 11:51:09.912184 23334 solver.cpp:218] Iteration 71120 (1.65832 iter/s, 12.0604s/20 iters), loss = 0.00239335
I0430 11:51:09.912330 23334 solver.cpp:237] Train net output #0: loss = 0.00239335 (* 1 = 0.00239335 loss)
I0430 11:51:09.912340 23334 sgd_solver.cpp:105] Iteration 71120, lr = 0.001
I0430 11:51:21.968586 23334 solver.cpp:218] Iteration 71140 (1.65888 iter/s, 12.0563s/20 iters), loss = 0.00161807
I0430 11:51:21.968646 23334 solver.cpp:237] Train net output #0: loss = 0.00161808 (* 1 = 0.00161808 loss)
I0430 11:51:21.968654 23334 sgd_solver.cpp:105] Iteration 71140, lr = 0.001
What confuses me is the loss. I planned to stop training my network once the loss dropped below 0.0001, but there are two losses: the training loss and the test loss. The training loss seems to hover around 0.0001, while the test loss is 0.388, far above my threshold. Which one should I use to decide when to stop training?
Such a large gap between test and training performance likely indicates that you have over-fit your data.
The purpose of a validation set is to make sure you are not over-fitting. You should use performance on the validation set to decide whether to stop training or keep going.
Usually, you want to stop training when the validation accuracy plateaus. Your data above suggests that you have indeed over-trained your model.
Ideally, the training, test, and validation errors should be roughly equal. In practice, this rarely happens.
Note that the loss is not a good metric unless your loss function and its weights are identical in all evaluation phases. For example, GoogLeNet weights training losses from three layers, but the validation test only cares about the final accuracy.
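The plateau criterion above can be sketched as a simple check over the history of validation accuracies. This is a minimal illustration, not anything Caffe provides; the window size and tolerance are made-up values you would tune for your own setup:

```python
# Hypothetical early-stopping check: stop when the validation accuracy
# has stopped improving over the last `window` evaluations.
# `window` and `min_delta` are illustrative choices, not Caffe defaults.

def should_stop(val_accuracies, window=5, min_delta=0.01):
    """Return True when the best accuracy in the last `window`
    evaluations beats the earlier best by less than `min_delta`."""
    if len(val_accuracies) < window + 1:
        return False
    recent_best = max(val_accuracies[-window:])
    earlier_best = max(val_accuracies[:-window])
    return recent_best - earlier_best < min_delta

# Accuracy still climbing -> keep training
print(should_stop([0.70, 0.75, 0.80, 0.85, 0.88, 0.91, 0.93]))   # False
# Accuracy has plateaued around 0.93 -> stop
print(should_stop([0.70, 0.85, 0.93, 0.931, 0.930, 0.932, 0.931, 0.932]))  # True
```

You would feed this function the `accuracy` values from the `Test net output` lines of the log (0.932502 in your excerpt) rather than either loss, for the reasons given above.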