Backpropagation: Why doesn't the error approach zero when it is multiplied by the derivative of the sigmoid?

I am trying to implement backpropagation for my simple neural network, which looks like this: 2 inputs, 2 hidden units (sigmoid), 1 output (sigmoid). But it does not seem to work correctly.
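
In equations, the forward pass I intend is (σ is the sigmoid; W_0 and W_1 correspond to w_0 and w_1 in the code below; x is the input column vector):

$$h = \sigma(W_0 x), \qquad \hat{y} = \sigma(W_1 h)$$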

import numpy as np

# Set inputs and labels
X = np.array([[0, 1],
              [0, 1],
              [1, 0],
              [1, 0]])

Y = np.array([[0, 0, 1, 1]]).T

# Seed so the random initialization is reproducible
np.random.seed(1)

# Initialize weights in [-1, 1)
w_0 = 2 * np.random.rand(2, 2) - 1
w_1 = 2 * np.random.rand(1, 2) - 1

# Learning rate
lr = 0.1

# Sigmoid function / derivative of the sigmoid function
def sigmoid(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))
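
# Note: sigmoid(x, deriv=True) returns x * (1 - x), so it expects x to
# already be a sigmoid activation (x = sigmoid(z)); this is the usual
# shortcut for computing sigmoid'(z) from the forward-pass output.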

# Neural network: forward pass plus one weight update for a single sample
def network(x, y, w_0, w_1):
    inputs = np.array(x, ndmin=2).T
    label = np.array(y, ndmin=2).T

    # Forward pass
    hidden = sigmoid(np.dot(w_0, inputs))
    output = sigmoid(np.dot(w_1, hidden))

    # Calculate error and deltas
    error = label - output
    delta = error * sigmoid(output, True)

    hidden_error = np.dot(w_1.T, error)
    delta_hidden = error * sigmoid(hidden, True)

    # Update weights
    w_1 += np.dot(delta, hidden.T) * lr
    w_0 += np.dot(delta_hidden, inputs.T) * lr

    return error

# Train
for i in range(6000):
    for j in range(X.shape[0]):
        error = network(X[j], Y[j], w_0, w_1)

        if i % 1000 == 0:
            print(error)

When I print out my error, the values do not approach 0, which cannot be right.

When I change the delta to

delta = error

it somehow works.

But why? Shouldn't we multiply the error by the derivative of the sigmoid before propagating it further back?
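
As I understand it, with a squared-error loss E = ½(y − ŷ)² (my assumption; the loss is never written down explicitly) and the sign convention from my code (error = y − ŷ), the textbook chain-rule deltas are

$$\delta_{\text{out}} = (y - \hat{y})\,\hat{y}(1 - \hat{y}), \qquad \delta_{\text{hidden}} = (W_1^\top \delta_{\text{out}}) \odot h(1 - h)$$

where ⊙ is element-wise multiplication. So the sigmoid-derivative factors ŷ(1 − ŷ) and h(1 − h) should be present at both layers, as far as I can tell.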

I also think the hidden delta should use the propagated hidden_error rather than the output error:

delta_hidden = hidden_error * sigmoid(hidden, True)
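
Putting this together, here is a minimal sketch of what I believe the backward pass should look like (network_fixed is just a name I made up, and this is unverified; note that, unlike my code above, it propagates delta rather than the raw error to the hidden layer, matching the equations above):

def network_fixed(x, y, w_0, w_1):
    # Forward pass, identical to network() above
    inputs = np.array(x, ndmin=2).T
    label = np.array(y, ndmin=2).T
    hidden = sigmoid(np.dot(w_0, inputs))
    output = sigmoid(np.dot(w_1, hidden))

    # Backward pass with the sigmoid-derivative factors included
    error = label - output
    delta = error * sigmoid(output, True)                # error * sigmoid'(z_out)
    hidden_error = np.dot(w_1.T, delta)                  # propagate delta, not the raw error
    delta_hidden = hidden_error * sigmoid(hidden, True)  # * sigmoid'(z_hidden)

    w_1 += np.dot(delta, hidden.T) * lr
    w_0 += np.dot(delta_hidden, inputs.T) * lr
    return error

Is this version correct, or is the derivative factor genuinely unnecessary here?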