Gradient Descent ANN - What is MATLAB doing that I'm not?
I'm trying to recreate a simple MLP artificial neural network in Python using gradient descent backpropagation. My goal is to try to recreate the accuracies that MATLAB's ANN produces, but I'm not even getting close. I use the same parameters as MATLAB: the same number of hidden nodes (20), 1000 epochs, a learning rate (alpha) of 0.01, and the same data (obviously), but my code makes no progress on improving the results, whereas MATLAB reaches accuracies of around 98%.
I've tried debugging through MATLAB to see what it's doing, but I haven't had much luck. I believe MATLAB scales the input data between 0 and 1 and adds a bias to the input, both of which I use in my Python code.
What is MATLAB doing that produces results so much higher? Or, more likely, what is my Python code doing wrong that produces results so poor? All I can think of is poor initialisation of the weights, incorrect reading of the data, incorrect manipulation of the data, or an incorrect/poor activation function (I've also tried tanh with the same result).
My attempt is below, based on code I found online and tweaked slightly to read my data, while the MATLAB script (just 11 lines of code) follows. At the bottom is a link to the datasets I use (which I also obtained through MATLAB):
Any help is greatly appreciated.
Main.py
import numpy as np
import Process
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in sklearn 0.20
from sklearn.preprocessing import LabelBinarizer
import warnings

def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x)*(1.0-sigmoid(x))

class NeuralNetwork:
    def __init__(self, layers):
        self.activation = sigmoid
        self.activation_prime = sigmoid_prime
        # Set weights
        self.weights = []
        # layers = [2,2,1]
        # range of weight values (-1,1)
        # input and hidden layers - random((2+1, 2+1)) : 3 x 3
        for i in range(1, len(layers) - 1):
            r = 2*np.random.random((layers[i-1] + 1, layers[i] + 1)) - 1
            self.weights.append(r)
        # output layer - random((2+1, 1)) : 3 x 1
        r = 2*np.random.random((layers[i] + 1, layers[i+1])) - 1
        self.weights.append(r)

    def fit(self, X, y, learning_rate, epochs):
        # Add column of ones to X
        # This is to add the bias unit to the input layer
        ones = np.atleast_2d(np.ones(X.shape[0]))
        X = np.concatenate((ones.T, X), axis=1)
        for k in range(epochs):
            i = np.random.randint(X.shape[0])
            a = [X[i]]
            for l in range(len(self.weights)):
                dot_value = np.dot(a[l], self.weights[l])
                activation = self.activation(dot_value)
                a.append(activation)
            # output layer
            error = y[i] - a[-1]
            deltas = [error * self.activation_prime(a[-1])]
            # we need to begin at the second to last layer
            # (a layer before the output layer)
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T)*self.activation_prime(a[l]))
            # reverse
            # [level3(output)->level2(hidden)] => [level2(hidden)->level3(output)]
            deltas.reverse()
            # backpropagation
            # 1. Multiply its output delta and input activation
            #    to get the gradient of the weight.
            # 2. Subtract a ratio (percentage) of the gradient from the weight.
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        a = np.concatenate((np.ones(1).T, np.array(x)))
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a

# Create neural net, 13 inputs, 20 hidden nodes, 3 outputs
nn = NeuralNetwork([13, 20, 3])
data = Process.readdata('wine')
# Split data out into input and output
X = data[0]
y = data[1]
# Normalise input data between 0 and 1.
X -= X.min()
X /= X.max()
# Split data into training and test sets (15% testing)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15)
# Create binary output form
y_ = LabelBinarizer().fit_transform(y_train)
# Train data
lrate = 0.01
epoch = 1000
nn.fit(X_train, y_, lrate, epoch)
# Test data
err = []
for e in X_test:
    # Create array of output data (argmax to get classification)
    err.append(np.argmax(nn.predict(e)))
# Hide warnings. UndefinedMetricWarning thrown when confusion matrix returns 0 in any one of the classifiers.
warnings.filterwarnings('ignore')
# Produce confusion matrix and classification report
print(confusion_matrix(y_test, err))
print(classification_report(y_test, err))
# Plot actual and predicted data
plt.figure(figsize=(10, 8))
target, = plt.plot(y_test, color='b', linestyle='-', lw=1, label='Target')
estimated, = plt.plot(err, color='r', linestyle='--', lw=3, label='Estimated')
plt.legend(handles=[target, estimated])
plt.xlabel('# Samples')
plt.ylabel('Classification Value')
plt.grid()
plt.show()
Process.py
import csv
import numpy as np

# Add constant column of 1's
def addones(arrayvar):
    return np.hstack((np.ones((arrayvar.shape[0], 1)), arrayvar))

def readdata(loc):
    # Open file and calculate the number of columns and the number of rows. The number of rows has a +1 as the 'next'
    # operator in num_cols has already passed over the first row.
    with open(loc + '.input.csv') as f:
        file = csv.reader(f, delimiter=',', skipinitialspace=True)
        num_cols = len(next(file))
        num_rows = len(list(file)) + 1
    # Create a zero'd array based on the number of columns and rows previously found.
    x = np.zeros((num_rows, num_cols))
    y = np.zeros(num_rows)
    # INPUT #
    # Loop through the input file and put each row into a new row of 'samples'
    with open(loc + '.input.csv', newline='') as csvfile:
        file = csv.reader(csvfile, delimiter=',')
        count = 0
        for row in file:
            x[count] = row
            count += 1
    # OUTPUT #
    # Do the same and loop through the output file.
    with open(loc + '.output.csv', newline='') as csvfile:
        file = csv.reader(csvfile, delimiter=',')
        count = 0
        for row in file:
            y[count] = row[0]
            count += 1
    # Set data type (np.float and np.int were removed in NumPy 1.24; use the builtins)
    x = np.array(x).astype(float)
    y = np.array(y).astype(int)
    return x, y
MATLAB script
%% LOAD DATA
[x1,t1] = wine_dataset;
%% SET UP NN
net = patternnet(20);
net.trainFcn = 'traingd';
net.layers{2}.transferFcn = 'logsig';
net.derivFcn = 'logsig';
%% TRAIN AND TEST
[net,tr] = train(net,x1,t1);
I think you are confusing the terms epoch and step. One epoch of training usually refers to one run through all of your data.
For example: if you have 10,000 samples, then you have put all 10,000 samples through your model (ignoring random sampling of the samples) and taken a step (updated your weights) each time.
The fix: run your network for longer:
nn.fit(X_train, y_, lrate, epoch*len(X))
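To put a number on the difference (a back-of-envelope sketch only; the training-set size of 151 is an assumption, roughly 85% of the 178-sample wine dataset):
# Rough arithmetic behind the fix above (n_train is an assumed value).
n_train = 151                           # ~85% of the 178-sample wine dataset
epochs = 1000
samples_seen_original = epochs          # original loop: one random sample per iteration
samples_seen_fixed = epochs * n_train   # one full pass over the data per epoch
print(samples_seen_original / n_train)  # ~6.6 effective passes through the data
print(samples_seen_fixed / n_train)     # 1000 passes, in line with MATLAB's 1000 epochs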
Bonus:
MATLAB's documentation translates epochs into (iterations) here, which is misleading, but comments on it here, and that is basically what I wrote above.
I believe I've found the problem. It's a combination of the dataset itself (this problem doesn't occur with all datasets) and the way I scaled the data. My original scaling method, which squeezes the results to between 0 and 1, wasn't helping the situation and was causing the poor results seen:
# Normalise input data between 0 and 1.
X -= X.min()
X /= X.max()
I found another scaling method, provided by the sklearn preprocessing package:
from sklearn import preprocessing
X = preprocessing.scale(X)
This scaling method does not stay between 0 and 1, and I have further investigation to do to determine why it helps so much, but results now come back with 96 to 100% accuracy. That is very much in line with the MATLAB results, which I assume uses a similar (or the same) preprocessing scaling method.
As I said above, this isn't the case with all datasets. Using the built-in sklearn iris or digits datasets seemed to produce good results without scaling.
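A likely reason for the difference (my reading of the code above, not something confirmed in the original post): the original normalisation uses one global minimum and maximum for the whole matrix, while preprocessing.scale standardises each column independently to zero mean and unit variance. The wine columns differ in magnitude by orders of magnitude (proline is in the hundreds to thousands, most other features are single digits), so a global min/max squashes the small-range features towards a constant. A minimal sketch with made-up numbers illustrates this:
import numpy as np
from sklearn import preprocessing

# Two features with very different magnitudes (made-up numbers, not the wine data).
X = np.array([[1.0, 1000.0],
              [2.0, 2000.0],
              [3.0, 3000.0]])

# Global min/max scaling, as in the original code: a single min and max
# for the entire matrix, so the small-range column collapses towards zero.
X_minmax = X - X.min()
X_minmax /= X_minmax.max()
print(X_minmax)
# [[0.0000  0.3331]
#  [0.0003  0.6666]
#  [0.0007  1.0000]]

# preprocessing.scale: each column is standardised independently to zero
# mean and unit variance, so both features remain informative.
print(preprocessing.scale(X))
# [[-1.2247 -1.2247]
#  [ 0.      0.    ]
#  [ 1.2247  1.2247]]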