Calculating the gradient of a NN in pure Python

import numpy

# Data and parameters

X  = numpy.array([[-1.086,  0.997,  0.283, -1.506]])
T  = numpy.array([[-0.579]])
W1 = numpy.array([[-0.339, -0.047,  0.746, -0.319, -0.222, -0.217],
                      [ 1.103,  1.093,  0.502,  0.193,  0.369,  0.745],
                      [-0.468,  0.588, -0.627, -0.319,  0.454, -0.714],
                      [-0.070, -0.431, -0.128, -1.399, -0.886, -0.350]])
W2 = numpy.array([[ 0.379, -0.071,  0.001,  0.281, -0.359,  0.116],
                      [-0.329, -0.705, -0.160,  0.234,  0.138, -0.005],
                      [ 0.977,  0.169,  0.400,  0.914, -0.528, -0.424],
                      [ 0.712, -0.326,  0.012,  0.437,  0.364,  0.716],
                      [ 0.611,  0.437, -0.315,  0.325,  0.128, -0.541],
                      [ 0.579,  0.330,  0.019, -0.095, -0.489,  0.081]])
W3 = numpy.array([[ 0.191, -0.339,  0.474, -0.448, -0.867,  0.424],
                      [-0.165, -0.051, -0.342, -0.656,  0.512, -0.281],
                      [ 0.678,  0.330, -0.128, -0.443, -0.299, -0.495],
                      [ 0.852,  0.067,  0.470, -0.517,  0.074,  0.481],
                      [-0.137,  0.421, -0.443, -0.557,  0.155, -0.155],
                      [ 0.262, -0.807,  0.291,  1.061, -0.010,  0.014]])
W4 = numpy.array([[ 0.073],
                      [-0.760],
                      [ 0.174],
                      [-0.655],
                      [-0.175],
                      [ 0.507]])
B1 = numpy.array([-0.760,  0.174, -0.655, -0.175,  0.507, -0.300])
B2 = numpy.array([ 0.205,  0.413,  0.114, -0.560, -0.136,  0.800])
B3 = numpy.array([-0.827, -0.113, -0.225,  0.049,  0.305,  0.657])
B4 = numpy.array([-0.270])

# Forward pass

Z1 = X.dot(W1)+B1
A1 = numpy.maximum(0,Z1)
Z2 = A1.dot(W2)+B2
A2 = numpy.maximum(0,Z2)
Z3 = A2.dot(W3)+B3
A3 = numpy.maximum(0,Z3)
Y  = A3.dot(W4)+B4

# Error

err = ((Y-T)**2).mean()

Given this example, I would like to implement the backward pass and obtain the gradients with respect to the weight and bias parameters. Apparently, the gradients of the last layer are as follows:

DY = 2*(Y-T)
DB4 = DY.mean(axis=0)
DW4 = A3.T.dot(DY) / len(X)
DZ3 = DY.dot(W4.T)*(Z3 > 0)

I know that the various derivatives are computed with the chain rule, but I don't quite understand how you arrive at this solution.

For example, DY is the derivative of err with respect to Y, so

d/dY (Y - T)**2 == 2 * (Y - T)

This is just a plain old derivative; no chain rule yet.
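If you want to convince yourself of that numerically, a quick central-difference check works. A minimal sketch with made-up values (not tied to the network above):

import numpy

Y, T, eps = numpy.array([0.3]), numpy.array([-0.579]), 1e-6
numeric  = (((Y + eps) - T)**2 - ((Y - eps) - T)**2) / (2 * eps)  # central difference
analytic = 2 * (Y - T)
print(numeric, analytic)  # both are approximately [1.758]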

DB4, on the other hand, does look like it uses the chain rule:

d/dB4 err == d/dB4 (A3 @ W4 + B4 - T)**2
== 2 * (A3 @ W4 + B4 - T) * d/dB4 (A3 @ W4 + B4 - T)
== 2 * (A3 @ W4 + B4 - T) * 1
== 2 * (Y - T)
== DY
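That can also be checked with finite differences. A sketch assuming X, T, W1..W4, B1..B4 and DB4 from the code above are already defined (loss_with_B4 is just a hypothetical helper name):

def loss_with_B4(B4_):
    # Recompute the forward pass and the error with a perturbed output bias
    A1 = numpy.maximum(0, X.dot(W1) + B1)
    A2 = numpy.maximum(0, A1.dot(W2) + B2)
    A3 = numpy.maximum(0, A2.dot(W3) + B3)
    return (((A3.dot(W4) + B4_) - T)**2).mean()

eps = 1e-6
numeric_DB4 = (loss_with_B4(B4 + eps) - loss_with_B4(B4 - eps)) / (2 * eps)  # B4 has a single entry
print(numeric_DB4, DB4)  # should agree to several decimal places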

And DW4 is:

d/dW4 err == d/dW4 (A3 @ W4 + B4 - T)**2
== 2 * (A3 @ W4 + B4 - T) @ (d/dW4 (A3 @ W4 + B4 - T))
== 2 * (Y - T) @ A3.T
[transposed and reordered so the matrix shapes match]
== A3.T @ DY

The trick behind A3.T @ DY is that d/dW4 (A3 @ W4) = A3.T: https://math.stackexchange.com/questions/1866757/not-understanding-derivative-of-a-matrix-matrix-product.
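One way to see why the transpose ends up on A3 is to compare shapes: the gradient of W4 must have the same shape as W4 itself. A small sketch, assuming the arrays from the question:

print(A3.shape, DY.shape)   # (1, 6) and (1, 1)
print(W4.shape)             # (6, 1)
print(A3.T.dot(DY).shape)   # (6, 1) -- the only arrangement that matches W4.shape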

To differentiate through A3 when computing DZ3 == d/dZ3 err, you have to take the activation function into account (TBH, I think Y = A3.dot(W4)+B4 should be Y = numpy.maximum(0, A3.dot(W4)+B4), because the final output should be the result of an activation function, but maybe your network architecture simply doesn't do that), and in your case that activation is ReLU.

Let's use the chain rule for (partial) derivatives and the rules of matrix calculus to backpropagate the (MSE) regression error through the last hidden layer of the network:

E = err = (Y - T)**2 (take mean over the batch to compute MSE)

DY = ∂E/∂Y = 2 * (Y - T)

∂E/∂W4 = (∂E/∂Y).(∂Y/∂W4)
= DY.(∂/∂W4 (A3.W4 + B4)) = DY.A3.T
= A3.T.DY (take the mean over all training examples in the batch X: sum and divide by the batch size |X|)

∂E/∂B4 = (∂E/∂Y).(∂Y/∂B4)
= DY.(∂/∂B4 (A3.W4 + B4)) = DY.1
= DY (take the mean over all the examples in the batch)

∂E/∂Z3 = (∂E/∂Y).(∂Y/∂A3).(∂A3/∂Z3)
= DY.(∂/∂A3 (A3.W4 + B4)).(1.{Z3 > 0} + 0.{Z3 <= 0})
= DY.W4.T.{Z3 > 0}

where {.} is the indicator function: by the definition of the nonlinear ReLU activation, the derivative is 1 when Z3 > 0 and 0 otherwise.
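Applying exactly the same pattern to the earlier layers gives the rest of the backward pass. A minimal sketch, continuing from the question's DY / DB4 / DW4 / DZ3 and assuming the forward-pass variables are still in scope:

# Layer 3 (Z3 = A2 @ W3 + B3, A3 = relu(Z3))
DB3 = DZ3.mean(axis=0)
DW3 = A2.T.dot(DZ3) / len(X)
DZ2 = DZ3.dot(W3.T) * (Z2 > 0)   # back through W3, then through layer 2's ReLU

# Layer 2 (Z2 = A1 @ W2 + B2, A2 = relu(Z2))
DB2 = DZ2.mean(axis=0)
DW2 = A1.T.dot(DZ2) / len(X)
DZ1 = DZ2.dot(W2.T) * (Z1 > 0)   # back through W2, then through layer 1's ReLU

# Layer 1 (Z1 = X @ W1 + B1, A1 = relu(Z1))
DB1 = DZ1.mean(axis=0)
DW1 = X.T.dot(DZ1) / len(X)

Each of these gradients can be verified with the same finite-difference check sketched earlier.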