Multilayer Perceptron Code Explanation

Solution:

Generate the training set

traincol1 = linspace(0.1, 15, 40)';             % 40 sample points in [0.1, 15]
noise = (0.2*rand(40,1)) - 0.1;                 % uniform noise in [-0.1, 0.1] (renamed from eps, which shadows the built-in)
traincol2 = sin(traincol1)./traincol1 - noise;  % noisy samples of sin(x)/x
train = [traincol1 traincol2];                  % column 1: inputs, column 2: targets
save('snn_a.txt','train','-ascii');             % '-ascii' so the .txt file is actually plain text
save('snn_a.mat','train');                      % MAT-file used by the main program

Generate the test set

testcol1 = linspace(0.1, 15, 400)';             % 400 sample points in [0.1, 15]
noise = (0.2*rand(400,1)) - 0.1;                % uniform noise in [-0.1, 0.1]
testcol2 = sin(testcol1)./testcol1 - noise;     % noisy samples of sin(x)/x
test = [testcol1 testcol2];                     % column 1: inputs, column 2: targets
save('snn_b.txt','test','-ascii');              % plain-text copy
save('snn_b.mat','test');                       % MAT-file used by the main program

Train the neural network

function net = train_net(trainingset, hidden_neurons)
% Parameters:
%   trainingset    - matrix with inputs in column 1 and targets (y) in column 2
%   hidden_neurons - number of neurons in the hidden layer
% Return value:
%   net - object representing the trained neural network
% Hidden-neuron activation function: tanh ('tansig'),
% output-neuron activation: linear ('purelin').

net = newff(trainingset(:, 1)', trainingset(:, 2)', hidden_neurons, ...
    {'tansig', 'purelin'}, 'trainlm');
rand('state', sum(100*clock));     % random number generator initialization
net = init(net);                   % weight initialization
net.trainParam.goal = 0.01;        % stopping criterion: MSE goal
net.trainParam.epochs = 400;       % maximum number of epochs
net = train(net, trainingset(:, 1)', trainingset(:, 2)');  % network training

Main program

% input data
load('snn_a.mat');                  % loads the 'train' matrix
load('snn_b.mat');                  % loads the 'test' matrix
hidden_neurons = 4;
% net training
net = train_net(train, hidden_neurons);
% assigning results
resulttrain = net(train(:, 1)')';   % predictions on the training inputs
resulttest = net(test(:, 1)')';     % predictions on the test inputs
% drawing
hold on
sn = @(x) sin(x)./x;                % element-wise division so fplot can evaluate vectors
fplot(sn, [0, 15], 'g');
plot(train(:, 1), resulttrain, 'r');
legend('Original function', 'Result')
hold off
% print mse results
mse(net, train(:, 2)', resulttrain')
mse(net, test(:, 2)', resulttest')

Can you explain train_net() and the main program?

Is there any way to improve it?

There is not much to explain.

train_net basically uses the function newff to create a feed-forward backpropagation network with the given parameters (number of hidden neurons, number of epochs, error goal, ...), and it uses your training dataset to train the network, i.e. to adjust the neurons' weights.
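
For reference, newff is considered obsolete in newer Neural Network Toolbox releases; a rough sketch of the same setup using the newer feedforwardnet API (assuming the same trainingset layout and hidden_neurons value as above) could look like this:

x = trainingset(:, 1)';                 % inputs as a row vector
t = trainingset(:, 2)';                 % targets as a row vector
net = feedforwardnet(hidden_neurons, 'trainlm');
net.layers{1}.transferFcn = 'tansig';   % hidden layer: tanh-shaped activation
net.layers{2}.transferFcn = 'purelin';  % output layer: linear activation
net.trainParam.goal = 0.01;             % stop once the MSE goal is reached
net.trainParam.epochs = 400;            % maximum number of training epochs
net = train(net, x, t);                 % Levenberg-Marquardt training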

Your main program then uses the trained neural network to obtain predictions for the training set and the test set.
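
For example, the test-set prediction could equivalently be written with sim, which is the classic way to evaluate a trained toolbox network and does the same thing as calling the network object directly:

resulttest = sim(net, test(:, 1)')';   % same result as resulttest = net(test(:, 1)')';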

Finally, it plots the ideal expected function together with the network's predictions so you can visualize how well the network performs.
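
Note that the posted main program only draws the training-set result; if you also want the test-set predictions in the same figure, a small sketch (reusing the variables from the main program above) could be:

hold on
fplot(@(x) sin(x)./x, [0.1, 15], 'g');   % noise-free target function over the data range
plot(train(:, 1), resulttrain, 'r');     % predictions on the training inputs
plot(test(:, 1), resulttest, 'b');       % predictions on the test inputs
legend('Original function', 'Train result', 'Test result')
hold off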

Lastly, it computes the MSE to get a numerical measure of that performance.
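
Numerically, that call boils down to the mean squared difference between targets and outputs, so you can cross-check it by hand (a sketch using the column vectors from the main program):

mse_train = mean((train(:, 2) - resulttrain).^2);   % should match mse(net, train(:, 2)', resulttrain')
mse_test  = mean((test(:, 2)  - resulttest).^2);    % should match mse(net, test(:, 2)', resulttest')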