Breakdown of Stochastic Gradient Descent Code in Python
In his online book on artificial neural networks, http://neuralnetworksanddeeplearning.com, Michael Nielsen provides the following code:
def update_mini_batch(self, mini_batch, eta):
    """Update the network's weights and biases by applying
    gradient descent using backpropagation to a single mini batch.
    The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
    is the learning rate."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [w-(eta/len(mini_batch))*nw
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]
I can't understand the nabla_b and nabla_w part.
If delta_nabla_b and delta_nabla_w are the gradients of the cost function, why do we add them to the existing values of nabla_b and nabla_w here?
    nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
    nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
Shouldn't we instead define
    nabla_b, nabla_w = self.backprop(x, y)
directly and update the weight and bias matrices?
Do we make nabla_b and nabla_w because we want to do an average over the gradients and they are the matrices of the sums of the gradients?
Yes, your thinking is correct. This code corresponds directly to the formula in step 3 of the gradient-descent algorithm in the tutorial.
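For reference, that update rule has (roughly) the following form, where $m$ is the mini-batch size (len(mini_batch)), $\eta$ is the learning rate eta, and $C_x$ is the cost for a single training example $x$:

$$w \rightarrow w' = w - \frac{\eta}{m} \sum_{x} \frac{\partial C_x}{\partial w}, \qquad b \rightarrow b' = b - \frac{\eta}{m} \sum_{x} \frac{\partial C_x}{\partial b}$$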
The formula itself is a bit misleading; intuitively it is easier to think of the weights and biases as being updated separately for each instance in the mini-batch. But if you remember that the gradient of a sum is the sum of the gradients, it becomes clear that the two views are actually the same. In both cases, all the gradients contribute to the parameter update in the same way.
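To make the "gradient of a sum is the sum of the gradients" point concrete, here is a minimal self-contained sketch (not from the book; it assumes a toy linear model with a squared-error cost) showing that accumulating per-example gradients and then scaling once by eta/len(mini_batch), as update_mini_batch does, produces exactly the same step as taking the gradient of the cost averaged over the mini-batch:

    import numpy as np

    np.random.seed(0)
    w = np.random.randn(3)        # toy weight vector
    X = np.random.randn(5, 3)     # mini-batch of 5 examples
    y = np.random.randn(5)        # targets
    eta = 0.1

    # gradient of the per-example cost C_x = 0.5 * (w.x - y)^2 w.r.t. w
    def grad_single(w, x, y):
        return (w @ x - y) * x

    # style of update_mini_batch: accumulate gradients, then scale once
    nabla_w = np.zeros_like(w)
    for x_i, y_i in zip(X, y):
        nabla_w += grad_single(w, x_i, y_i)      # like nabla_w = [nw+dnw ...]
    w_accumulated = w - (eta / len(X)) * nabla_w

    # equivalent view: one step on the gradient of the averaged cost
    grad_mean = np.mean([grad_single(w, x_i, y_i) for x_i, y_i in zip(X, y)],
                        axis=0)
    w_mean_cost = w - eta * grad_mean

    print(np.allclose(w_accumulated, w_mean_cost))   # True: identical update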