DLIB: Training shape_predictor for 194 landmarks (helen dataset)
I am training DLIB's shape_predictor on the 194 face landmarks of the helen dataset, for detecting face landmarks through the dlib library's face_landmark_detection_ex.cpp.
Training gave me an sp.dat
binary file of about 45 MB, versus the provided model (http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2) for 68 face landmarks. The training reported:
- mean training error: 0.0203811
- mean testing error: 0.0204511
When I use the trained model to get face landmark positions, the result I get..
deviates badly from the result obtained with the 68-landmark model.
68-landmark image:
Why?
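For reference, the detection step is essentially dlib's face_landmark_detection_ex.cpp. Here is a minimal sketch of how the trained model is loaded and applied; "sp.dat" is the file from the question, and the command-line image path is illustrative:

#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_transforms.h>
#include <dlib/image_io.h>
#include <iostream>

using namespace dlib;

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    // HOG face detector shipped with dlib, plus the trained 194-point model.
    frontal_face_detector detector = get_frontal_face_detector();
    shape_predictor sp;
    deserialize("sp.dat") >> sp;            // the file produced by training

    array2d<rgb_pixel> img;
    load_image(img, argv[1]);
    pyramid_up(img);                        // upsample so smaller faces are found

    // Fit the shape model inside every detected face box.
    for (const rectangle& face : detector(img))
    {
        full_object_detection shape = sp(img, face);
        std::cout << "parts: " << shape.num_parts() << "\n";      // 194 here
        std::cout << "first landmark: " << shape.part(0) << "\n"; // pixel coords
    }
    return 0;
}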
OK, it seems you haven't read the code comments (?):
shape_predictor_trainer trainer;
// This algorithm has a bunch of parameters you can mess with. The
// documentation for the shape_predictor_trainer explains all of them.
// You should also read Kazemi's paper which explains all the parameters
// in great detail. However, here I'm just setting three of them
// differently than their default values. I'm doing this because we
// have a very small dataset. In particular, setting the oversampling
// to a high amount (300) effectively boosts the training set size, so
// that helps this example.
trainer.set_oversampling_amount(300);
// I'm also reducing the capacity of the model by explicitly increasing
// the regularization (making nu smaller) and by using trees with
// smaller depths.
trainer.set_nu(0.05);
trainer.set_tree_depth(2);
Look at the Kazemi paper, Ctrl-F for the string 'parameter', and read...
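For context, those lines come from dlib's train_shape_predictor_ex.cpp. A condensed sketch of the full training loop with the three parameter overrides; the XML file name is an assumption, so point it at your own helen annotations in dlib's imglab format:

#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <iostream>

using namespace dlib;

int main()
{
    // Landmark annotations in dlib's imglab XML format; the file name is
    // an assumption -- use whatever your helen training XML is called.
    dlib::array<array2d<unsigned char>> images_train;
    std::vector<std::vector<full_object_detection>> faces_train;
    load_image_dataset(images_train, faces_train, "helen_train.xml");

    shape_predictor_trainer trainer;
    trainer.set_oversampling_amount(300);  // effectively enlarges the training set
    trainer.set_nu(0.05);                  // smaller nu = stronger regularization
    trainer.set_tree_depth(2);             // shallower trees = lower model capacity
    trainer.be_verbose();

    shape_predictor sp = trainer.train(images_train, faces_train);

    // Mean landmark error on the training images (dlib's example additionally
    // normalizes this by the inter-ocular distance).
    std::cout << "mean training error: "
              << test_shape_predictor(sp, images_train, faces_train) << std::endl;

    serialize("sp.dat") << sp;             // the ~45 MB model from the question
    return 0;
}

With a small dataset, raising the oversampling amount while lowering nu and the tree depth trades model capacity for generalization, which is exactly the point the code comments above are making.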