Backpropagating bias in a neural network

Following Andrew Trask's example, I want to implement a 3-layer neural network - 1 input, 1 hidden, 1 output layer - with a simple dropout, for binary classification.

If I include the bias terms b1 and b2, then I need to modify Andrew's code slightly, as below.

import numpy as np

X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
y = np.array([[0,1,1,0]]).T
alpha,hidden_dim,dropout_percent = (0.5,4,0.2)
synapse_0 = 2*np.random.random((X.shape[1],hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
b1 = np.zeros(hidden_dim)
b2 = np.zeros(1)
for j in range(60000):
    # sigmoid activation function
    layer_1 = (1/(1+np.exp(-(np.dot(X,synapse_0) + b1))))
    # dropout
    layer_1 *= np.random.binomial([np.ones((len(X),hidden_dim))],1-dropout_percent)[0] * (1.0/(1-dropout_percent))
    layer_2 = 1/(1+np.exp(-(np.dot(layer_1,synapse_1) + b2)))
    # sigmoid derivative = s(x)(1-s(x))
    layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))
    layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
    synapse_1 -= (alpha * layer_1.T.dot(layer_2_delta))
    synapse_0 -= (alpha * X.T.dot(layer_1_delta))
    b1 -= alpha*layer_1_delta
    b2 -= alpha*layer_2_delta

The problem, of course, is that the dimensions of b1 do not match the dimensions of layer_1_delta, and likewise for b2 and layer_2_delta.
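
To make the mismatch concrete, here is a quick shape check (a minimal sketch using the arrays defined above; the print calls are only for illustration):

# layer_1_delta has one row per training sample, b1 has one entry per hidden unit
print(layer_1_delta.shape)  # (4, 4) -> (samples, hidden_dim)
print(b1.shape)             # (4,)   -> (hidden_dim,)
print(layer_2_delta.shape)  # (4, 1) -> (samples, 1)
print(b2.shape)             # (1,)
# so b1 -= alpha * layer_1_delta raises a broadcasting error:
# the per-sample deltas have not been reduced to one value per bias entry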

I don't understand how to compute the deltas used to update b1 and b2. According to Michael Nielsen's example, b1 and b2 should each be updated by a delta, which in my code I believe are layer_1_delta and layer_2_delta respectively.

What am I doing wrong here? Have I messed up the dimensions of the deltas or of the biases? I suspect it is the latter, because if I remove the biases from this code it works fine. Thanks in advance.

So, first of all, I would change the X in bX to 0 and 1 to correspond with synapse_X, since that is where they belong, which makes it:

b1 -= alpha * 1.0 / m * np.sum(layer_2_delta)
b0 -= alpha * 1.0 / m * np.sum(layer_1_delta)

where m is the number of samples in the training set. Also, the dropout rate was absurdly high and was actually hurting convergence. So, with all of that considered, the whole code:

import numpy as np

X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
m = X.shape[0]
y = np.array([[0,1,1,0]]).T
alpha,hidden_dim,dropout_percent = (0.5,4,0.02)
synapse_0 = 2*np.random.random((X.shape[1],hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
b0 = np.zeros(hidden_dim)
b1 = np.zeros(1)
for j in range(10000):
    # sigmoid activation function
    layer_1 = (1/(1+np.exp(-(np.dot(X,synapse_0) + b0))))
    # dropout
    layer_1 *= np.random.binomial([np.ones((len(X),hidden_dim))],1-dropout_percent)[0] * (1.0/(1-dropout_percent))
    layer_2 = 1/(1+np.exp(-(np.dot(layer_1,synapse_1) + b1)))
    # sigmoid derivative = s(x)(1-s(x))
    layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))
    layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
    synapse_1 -= (alpha * layer_1.T.dot(layer_2_delta))
    synapse_0 -= (alpha * X.T.dot(layer_1_delta))
    b1 -= alpha * 1.0 / m * np.sum(layer_2_delta)
    b0 -= alpha * 1.0 / m * np.sum(layer_1_delta)

print(layer_2)
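
As a possible refinement, np.sum without an axis argument collapses layer_1_delta to a single scalar, so every entry of b0 receives the same update; summing over the sample axis only would keep one gradient per hidden unit. A sketch of what those two update lines could look like (not required for this toy example to converge):

# sum the deltas over the training samples only (axis 0),
# so b0 keeps one gradient entry per hidden unit and b1 stays a length-1 array
b1 -= alpha * 1.0 / m * np.sum(layer_2_delta, axis=0)
b0 -= alpha * 1.0 / m * np.sum(layer_1_delta, axis=0)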