Neural network not producing results

Here is my project. It consists of: m = 24, where m is the number of training examples; 3 hidden layers plus the input layer; 3 sets of weights connecting the layers; each example is 1x38 and its response y is 1x1.

import numpy as np
x = np.array([
[1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0],
[1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1],
[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0],
[1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0],
[0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1],
[1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0],
[0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0]])

y = np.array([
    [1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1,0]]).T

w = np.random.random((38, 39))
w2 = np.random.random((39, 39))
w3 = np.random.random((39, 1))

for j in range(100000):
    a2 = 1/(1 + np.exp(-(np.dot(x, w) + 1)))
    a3 = 1/(1 + np.exp(-(np.dot(a2, w2) + 1)))
    a4 = 1/(1 + np.exp(-(np.dot(a3, w3) + 1)))
    a4delta = (y - a4) * (1 * (1 - a4))
    a3delta = a4delta.dot(w3.T) * (1 * (1 - a3))
    a2delta = a3delta.dot(w2.T) * (1 * (1 - a2))
    w3 += a3.T.dot(a4delta)
    w2 += a2.T.dot(a3delta)
    w += x.T.dot(a2delta)
print(a4)

The output looks like this:

[[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]]

Can anyone see whether I've done something wrong? Does my network need to be changed? I've tried experimenting with the hyperparameters by adding more hidden layers and more memory.

You have a few mistakes, plus some things that I think are errors but might just be a different implementation.

You are adding the gradient to the weights, when you should be subtracting the gradient multiplied by a step size. That's why your weights shoot up toward 1.0 after just one iteration.

Lines like this:

w3 += a3.T.dot(a4delta)

should look something like this:

 w3 -= addBias(a3).T.dot(a4delta) * step
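The effect of the update sign is easy to see on a one-variable toy problem (this sketch is illustrative and not part of the original code):

```python
# Minimize f(w) = w**2, whose gradient is 2*w.
w_down = 3.0
for _ in range(100):
    w_down -= 0.1 * (2 * w_down)   # subtract step * gradient: converges to 0

w_up = 3.0
for _ in range(10):
    w_up += 0.1 * (2 * w_up)       # add the gradient instead: diverges

print(w_down, w_up)
```

Subtracting the scaled gradient walks downhill toward the minimum; adding it walks uphill, which is why your activations saturate at 1.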

Also, I don't think you have the right expression for the partial derivative of the sigmoid function. I think lines like this:

a3delta = a4delta.dot(w3.T) * (1 * (1 - a3)) 

should be:

a3delta = a4delta.dot(w3.T) * (a3 * (1 - a3))
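You can verify the a * (1 - a) form numerically against a finite-difference estimate (a quick sanity check, not from the original post):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = 0.5
a = sigmoid(z)
analytic = a * (1 - a)                                       # d(sigmoid)/dz
numeric = (sigmoid(z + 1e-6) - sigmoid(z - 1e-6)) / 2e-6     # central difference
print(abs(analytic - numeric) < 1e-9)
```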

You should also initialize your weights close to zero, with something like:

ep = 0.12
w = np.random.random((39, 39)) * 2 * ep - ep
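np.random.random samples uniformly from [0, 1), so the * 2 * ep - ep transform maps the weights into [-ep, ep) rather than leaving them all in [0, 1):

```python
import numpy as np

ep = 0.12
w = np.random.random((39, 39)) * 2 * ep - ep
print(w.min() >= -ep, w.max() < ep)   # every weight lies in [-ep, ep)
```

Small symmetric weights keep the sigmoids out of their flat saturation regions at the start of training.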

Most implementations also add a bias node to each layer, which you're not doing. It complicates things a bit, but I think it makes the network converge faster.

For me, this converges to a confident answer within 200 iterations:

ep = 0.12

# Weights have different shapes to account for the bias node
w = np.random.random((39, 39)) * 2 * ep - ep
w2 = np.random.random((40, 39)) * 2 * ep - ep
w3 = np.random.random((40, 1)) * 2 * ep - ep

def addBias(mat):
    return np.hstack((np.ones((mat.shape[0], 1)), mat))

# step is negative because a4delta below is (y - a4), i.e. minus the error
# gradient; the two sign flips cancel, giving ordinary gradient descent
step = -.1
for j in range(200):
    # Forward prop
    a2 = 1/(1 + np.exp(- addBias(x).dot(w)))
    a3 = 1/(1 + np.exp(- addBias(a2).dot(w2)))
    a4 = 1/(1 + np.exp(- addBias(a3).dot(w3)))

    # Back prop
    a4delta = (y - a4)  # output error; no sigmoid-derivative factor at the output layer
    # need to remove bias nodes here
    a3delta = a4delta.dot(w3[1:,:].T) * (a3 * (1 - a3))
    a2delta = a3delta.dot(w2[1:,:].T) * (a2 * (1 - a2))

    # Gradient Descent
    # Multiply gradient by step then subtract
    w3 -= addBias(a3).T.dot(a4delta) * step
    w2 -= addBias(a2).T.dot(a3delta) * step 
    w -= addBias(x).T.dot(a2delta) * step
print(np.rint(a4))