Why do I get different neural network training results each time even the initial weights are the same?
I know that if the initial weights and biases are random, the results will differ, so I used a genetic algorithm to optimize the structure of a BP neural network and set the GA-derived initial weights and biases before training.
I am using Matlab R2014a, and the code is as follows:
clc
clear all;
LoopTime = 100;
NetAll = cell(1,LoopTime);
MatFileToSave = sprintf('BPTraining_%d.mat',LoopTime); % '%4d' would pad with a space: 'BPTraining_ 100.mat'
input_train = xlsread('test.xls','D1:F40')';
output_train = xlsread('test.xls','H1:H40')';
[inputn,inputps] = mapminmax(input_train);
[outputn,outputps] = mapminmax(output_train);
A = [];
if ~exist(MatFileToSave,'file')
    for ii = 1:LoopTime
        net = newff(inputn,outputn,7,{'tansig'},'trainlm');
        % set division options AFTER newff; newff rebuilds the net and
        % discards anything assigned to it beforehand
        net.divideFcn = 'dividerand';
        net.divideMode = 'sample';
        net.divideParam.trainRatio = 70/100;
        net.divideParam.valRatio = 30/100;
        net.divideParam.testRatio = 0/100;
        net.trainParam.epochs = 2000;
        net.trainParam.lr = 0.1;
        net.trainParam.goal = 0.00001;
        net.iw{1,1} = [ 0.56642385,-0.044929342, 2.806006491;
                       -0.129892602, 2.969433103,-0.056528269;
                        0.200067228,-1.074037985,-0.90233406;
                       -0.794299829,-2.202876191, 0.346403187;
                        0.083438759, 1.246476813, 1.788348379;
                        0.889662621, 1.024847111, 2.428373515;
                       -1.24788069,  1.383238864,-1.313847905];
        net.b{1} = [-1.363912639;-1.978352461;-0.036013077;0.135126212;1.995020537;-0.223083372;-1.013341625];
        net.lw{2,1} = [-0.412881802 -0.146069773 1.711325447 -1.091444059 -2.069737603 0.765038862 -2.825474689];
        net.b{2} = -2.182832342;
        [net,tr] = train(net,inputn,outputn);
        yyn = sim(net,inputn);
        yy = mapminmax('reverse',yyn,outputps);
        regre = min(corrcoef(yy,output_train));
        relerr = mean(abs((yy-output_train)./output_train)); % './' for elementwise; 'error' shadows a built-in
        rmse = sqrt(mean((yy-output_train).^2)); % std(yy) is not the RMSE
        A = [A;ii,regre,relerr,rmse];
        NetAll{ii} = net;
        clear net;
        figure
        plotregression(output_train,yy,'Regression');
        forder = 'regre_tr';
        if ~exist(forder,'dir')
            mkdir(forder);
        end
        picstr = [forder '\regre_' num2str(ii)];
        print('-dpng','-r100',picstr);
        close
    end
    save(MatFileToSave,'NetAll');
    xlswrite('BPTraining_100.xls',A);
end
I wrote a 'for-end' loop to check whether the result is identical each time, but the regression coefficient varies from 0.8 to 0.98, which is not what I expected.
So my questions are:
- Is my code for setting the initial weights correct? If not, how should I set them?
- If it is correct, why do the results still differ?
As @Neil Slater said: somewhere inside the dividerand function there is a random part, and if you do not fix its randomness the results can differ. Try running
[a,b,c]=dividerand(10,0.7,0.15,0.15)
several times; the result changes each time. You can either choose a different 'net.divideMode' as Neil suggests, or create your own divide function with a fixed random seed (I skip over the fact that a fixed divider is no longer truly random):
open dividerand % opens the matlab function
% change the name of the function in line 1
function [out1,out2,out3,out4,out5,out6] = mydividerand(in1,varargin)
% go to line 105 and add
rng(1); % fix the random seed
allInd = randperm(Q); % previous line 105, the random part of the function
% use 'save as' and save it to your work folder
Keep in mind that changing the original 'dividerand' is unwise, so it is best to use 'save as' before touching anything. Also, the new function name should be different and unique (like mydividerand):
[a,b,c]=mydividerand(10,0.7,0.15,0.15) %always the same result
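A lighter-weight alternative, without copying or editing any toolbox file, is a sketch along these lines (assuming the same 40-sample, 70/30/0 setup as in the question): either seed MATLAB's global RNG right before train, so the randperm inside dividerand always draws the same permutation, or switch to 'divideind' and hand the network fixed index vectors so the split is fully deterministic.

% Option 1: seed the global RNG just before training; dividerand then
% produces the same train/validation split on every loop iteration.
rng(1);
[net,tr] = train(net,inputn,outputn);

% Option 2: specify the split explicitly with fixed indices.
% With 40 samples and a 70/30/0 split: 28 training, 12 validation.
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:28;
net.divideParam.valInd   = 29:40;
net.divideParam.testInd  = [];
[net,tr] = train(net,inputn,outputn);

Note that Option 2 also removes the shuffling of which samples land in the validation set, so it is the stricter of the two if the goal is bit-for-bit repeatable runs.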