Performing L1 regularization on a mini batch update

I'm currently reading Neural Networks and Deep Learning and I've run into a problem. The exercise is to update the code he gives to use L1 regularization instead of L2 regularization.

The original code, which uses L2 regularization, is:

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """Update the network's weights and biases by applying gradient
    descent using backpropagation to a single mini batch.  The
    ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
    learning rate, ``lmbda`` is the regularization parameter, and
    ``n`` is the total size of the training data set.

    """
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]

Here you can see that self.weights is updated using the L2 regularization term. For L1 regularization, I believe I only need to update that same line to reflect

    w -> w - (eta*(lmbda/n))*sgn(w) - eta*∂C/∂w

The book says we can estimate the ∂C/∂w term using a mini-batch average. That was a confusing statement to me, but I took it to mean that each mini-batch uses the average of nabla_w for each layer. That led me to the following edit of the code:

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """Update the network's weights and biases by applying gradient
    descent using backpropagation to a single mini batch.  The
    ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
    learning rate, ``lmbda`` is the regularization parameter, and
    ``n`` is the total size of the training data set.

    """
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    # replace each layer's gradient with the average of all its entries,
    # broadcast back to the layer's shape
    avg_nw = [np.array([[np.average(layer)] * len(layer[0])] * len(layer))
              for layer in nabla_w]
    self.weights = [(1-eta*(lmbda/n))*w-(eta)*nw
                    for w, nw in zip(self.weights, avg_nw)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]

But the results I'm getting are practically just noise, with accuracy around 10%. Am I interpreting the statement incorrectly, or is my code wrong? Any hints would be appreciated.

This is not correct.

Conceptually, L2 regularization says that after each training iteration we shrink W geometrically by some decay factor. That way, if W becomes very large it shrinks by more, which keeps any individual value in W from growing too large.

Conceptually, L1 regularization says that after each training iteration we reduce W linearly toward zero by some constant, without crossing zero (positive values decrease toward zero but not below it; negative values increase toward zero but not above it). This zeroes out very small values in W, leaving only the values that make a significant contribution.
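
As an illustrative sketch (my own, not part of the original answer), assuming a hypothetical per-iteration decay constant c = eta*(lmbda/n), the two decay rules applied to a weight array look like this:

import numpy as np

c = 0.01  # hypothetical decay per iteration, e.g. eta*(lmbda/n)

def l2_decay(w, c):
    # Geometric shrinkage: every weight loses a fixed fraction,
    # so large weights shrink by more in absolute terms.
    return (1.0 - c) * w

def l1_decay(w, c):
    # Linear shrinkage: every weight moves toward zero by the same
    # fixed amount, clamped so it never crosses zero.
    return np.sign(w) * np.maximum(np.abs(w) - c, 0.0)

w = np.array([2.0, 0.05, -0.005])
print(l2_decay(w, c))  # [ 1.98  0.0495 -0.00495]  proportional shrink
print(l1_decay(w, c))  # [ 1.99  0.04   -0.     ]  fixed shrink; small weight hits zero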

Your second equation

self.weights = [(1-eta*(lmbda/n))*w-(eta)*nw
                for w, nw in zip(self.weights, avg_nw)]

does not implement the raw subtraction, but still has the multiplicative (geometric) scaling in (1-eta*(lmbda/n))*w.

Implement some function reduceLinearlyToZero that takes w and eta*(lmbda/n) and returns max( abs( w ) - eta*(lmbda/n), 0 ) * ( 1.0 if w >= 0 else -1.0 ), applied elementwise to the weight array:

def reduceLinearlyToZero(w, eta, lmbda, n):
    # Shrink each weight toward zero by eta*(lmbda/n), clamping at zero
    # so the sign never flips (elementwise over the numpy weight array).
    return np.sign(w) * np.maximum(np.abs(w) - eta*(lmbda/n), 0.0)
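
For instance (a quick sanity check of my own, with made-up values, not from the original answer), eta=0.1, lmbda=5.0, n=50 gives a shrinkage of 0.01 per update and no weight ever crosses zero:

w = np.array([0.5, -0.5, 0.001, -0.001])
print(reduceLinearlyToZero(w, 0.1, 5.0, 50))
# large weights move toward zero by 0.01 -> 0.49, -0.49
# tiny weights are clamped to exactly zero -> 0., -0.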


self.weights = [ reduceLinearlyToZero(w,eta,lmbda,n)-(eta/len(mini_batch))*nw
                for w, nw in zip(self.weights, nabla_w)]

or possibly

self.weights = [ reduceLinearlyToZero(w-(eta/len(mini_batch))*nw,eta,lmbda,n)
                for w, nw in zip(self.weights, nabla_w)]
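
Putting this together, a minimal sketch of an L1-regularized update_mini_batch (assuming the same Network class and numpy import as the book's code, and following the first variant above: shrink each weight linearly toward zero, then subtract the averaged mini-batch gradient) could look like:

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """L1-regularized variant: shrink each weight linearly toward zero
    by eta*(lmbda/n), then apply the averaged mini-batch gradient."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [np.sign(w)*np.maximum(np.abs(w)-eta*(lmbda/n), 0.0)
                    - (eta/len(mini_batch))*nw
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]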