OpenCV 3.1 ANN predict returns nan

I'm trying to implement a neural network using the OpenCV ANN library. I had a working solution, but it stopped working after I upgraded to OpenCV 3.1, so I created a simplified test case; the problem remains. The ANN trains successfully, but when I call predict with a row from trainData, it returns a Mat of nan values. The code is:

cv::Ptr< cv::ml::ANN_MLP > nn = cv::ml::ANN_MLP::create();
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);
nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP);
nn->setBackpropMomentumScale(0.1);
nn->setBackpropWeightScale(0.1);
nn->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, (int)100000, 1e-6));

cv::Mat trainData(15, 4, CV_32FC1);
trainData.at<float>(0, 0) = 5.5f; trainData.at<float>(0, 1) = 3.5f; trainData.at<float>(0, 2) = 1.3f; trainData.at<float>(0, 3) = 0.2f;
trainData.at<float>(1, 0) = 6.5f; trainData.at<float>(1, 1) = 2.8f; trainData.at<float>(1, 2) = 4.5999999f; trainData.at<float>(1, 3) = 1.5f;
trainData.at<float>(2, 0) = 6.3000002f; trainData.at<float>(2, 1) = 2.3f; trainData.at<float>(2, 2) = 4.4000001f; trainData.at<float>(2, 3) = 1.3f;
trainData.at<float>(3, 0) = 6.0f; trainData.at<float>(3, 1) = 2.2f; trainData.at<float>(3, 2) = 4.0f; trainData.at<float>(3, 3) = 1.0f;
trainData.at<float>(4, 0) = 4.5999999f; trainData.at<float>(4, 1) = 3.0999999f; trainData.at<float>(4, 2) = 1.5f; trainData.at<float>(4, 3) = 0.2f;
trainData.at<float>(5, 0) = 5.0f; trainData.at<float>(5, 1) = 3.2f; trainData.at<float>(5, 2) = 1.2f; trainData.at<float>(5, 3) = 0.2f;
trainData.at<float>(6, 0) = 7.4000001f; trainData.at<float>(6, 1) = 2.8f; trainData.at<float>(6, 2) = 6.0999999f; trainData.at<float>(6, 3) = 1.9f;
trainData.at<float>(7, 0) = 6.0f; trainData.at<float>(7, 1) = 2.9000001f; trainData.at<float>(7, 2) = 4.5f; trainData.at<float>(7, 3) = 1.5f;
trainData.at<float>(8, 0) = 5.0f; trainData.at<float>(8, 1) = 3.4000001f; trainData.at<float>(8, 2) = 1.5f; trainData.at<float>(8, 3) = 0.2f;
trainData.at<float>(9, 0) = 6.4000001f; trainData.at<float>(9, 1) = 2.9000001f; trainData.at<float>(9, 2) = 4.3000002f; trainData.at<float>(9, 3) = 1.3f;
trainData.at<float>(10, 0) = 7.1999998f; trainData.at<float>(10, 1) = 3.5999999f; trainData.at<float>(10, 2) = 6.0999999f; trainData.at<float>(10, 3) = 2.5f;
trainData.at<float>(11, 0) = 5.0999999f; trainData.at<float>(11, 1) = 3.3f; trainData.at<float>(11, 2) = 1.7f; trainData.at<float>(11, 3) = 0.5f;
trainData.at<float>(12, 0) = 7.1999998f; trainData.at<float>(12, 1) = 3.0f; trainData.at<float>(12, 2) = 5.8000002f; trainData.at<float>(12, 3) = 1.6f;
trainData.at<float>(13, 0) = 6.0999999f; trainData.at<float>(13, 1) = 2.8f; trainData.at<float>(13, 2) = 4.0f; trainData.at<float>(13, 3) = 1.3f;
trainData.at<float>(14, 0) = 5.8000002f; trainData.at<float>(14, 1) = 2.7f; trainData.at<float>(14, 2) = 4.0999999f; trainData.at<float>(14, 3) = 1.0f;

cv::Mat trainLabels(15, 1, CV_32FC1);
trainLabels.at<float>(0, 0) = 0; trainLabels.at<float>(1, 0) = 0;
trainLabels.at<float>(2, 0) = 0; trainLabels.at<float>(3, 0) = 0;
trainLabels.at<float>(4, 0) = 0; trainLabels.at<float>(5, 0) = 0;
trainLabels.at<float>(6, 0) = 1; trainLabels.at<float>(7, 0) = 0;
trainLabels.at<float>(8, 0) = 0; trainLabels.at<float>(9, 0) = 0;
trainLabels.at<float>(10, 0) = 1; trainLabels.at<float>(11, 0) = 0;
trainLabels.at<float>(12, 0) = 1; trainLabels.at<float>(13, 0) = 0; trainLabels.at<float>(14, 0) = 0;

cv::Mat layers = cv::Mat(3, 1, CV_32SC1);
layers.row(0) = cv::Scalar(trainData.cols);
layers.row(1) = cv::Scalar(4);
layers.row(2) = cv::Scalar(1);
nn->setLayerSizes(layers);
nn->train(trainData, cv::ml::SampleTypes::ROW_SAMPLE, trainLabels);

cv::Mat out;
nn->predict(trainData.row(6), out);

for (int y = 0; y< out.cols; y++) {
    std::cout << out.row(0).col(y) << ",";
}

std::cout << std::endl;

The output is:

[nan],

The trainData matrix has 15 rows and 4 columns, with the values set manually. trainLabels is a matrix with 15 rows and 1 column.

I'm using Visual Studio 2015, and the project targets x86.

EDIT: When I save the model with nn->save("file"), I get the following:

<?xml version="1.0"?>
<opencv_storage>
<opencv_ml_ann_mlp>
  <format>3</format>
  <layer_sizes>
    4 2 1</layer_sizes>
  <activation_function>SIGMOID_SYM</activation_function>
  <f_param1>1.</f_param1>
  <f_param2>1.</f_param2>
  <min_val>0.</min_val>
  <max_val>0.</max_val>
  <min_val1>0.</min_val1>
  <max_val1>0.</max_val1>
  <training_params>
    <train_method>BACKPROP</train_method>
    <dw_scale>1.0000000000000001e-01</dw_scale>
    <moment_scale>1.0000000000000001e-01</moment_scale>
    <term_criteria>
      <iterations>100000</iterations></term_criteria></training_params>
  <input_scale>
    3.0610774975484543e+02 -7.2105386030315177e+00
    6.5791999914499740e+02 -7.6542332347898991e+00
    1.4846784833724132e+02 -2.1387134611442429e+00
    3.7586804114718842e+02 -1.5919117803235303e+00</input_scale>
  <output_scale>
    .Inf .Nan</output_scale>
  <inv_output_scale>
    0. 0.</inv_output_scale>
  <weights>
    <_>
      -9.9393472658672849e-02 -2.6465950290426005e-01
      7.0886408359726163e-02 2.9121955862626381e-01
      5.6651702579549310e-02 -2.1540916480791003e-01
      -1.0692250684467182e-01 -2.4494868679529785e-01
      5.2300263291242721e-01 7.7835339395571990e-03</_>
    <_>
      6.8110331452494011e-01 -1.4243818904976885e-01
      -1.7380883866714303e-01</_></weights></opencv_ml_ann_mlp>
</opencv_storage>

OK, after spending some time trying possible combinations, I found the solution.

The activation function must be set after the layer sizes. I don't know why, but when I swap the lines like this

nn->setLayerSizes(layers);
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);

it works. If anyone knows the reason for this, please let me know.
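
For reference, here is a minimal consolidated sketch of the working setup with the corrected call order, assuming the same trainData and trainLabels matrices as in the question (their construction is omitted here):

cv::Ptr<cv::ml::ANN_MLP> nn = cv::ml::ANN_MLP::create();

// Define the layer sizes first: 4 inputs, one hidden layer of 4 neurons, 1 output
cv::Mat layers = (cv::Mat_<int>(3, 1) << 4, 4, 1);
nn->setLayerSizes(layers);

// Set the activation function only after the layer sizes
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);

nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP);
nn->setBackpropMomentumScale(0.1);
nn->setBackpropWeightScale(0.1);
nn->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100000, 1e-6));

nn->train(trainData, cv::ml::ROW_SAMPLE, trainLabels);

cv::Mat out;
nn->predict(trainData.row(6), out);
std::cout << out << std::endl; // now prints a finite prediction instead of nan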