Why does my implementation of L1 regularization give poor performance?

I am following an online tutorial about neural networks, neuralnetworksanddeeplearning.com. The author, Nielsen, implemented L2 regularization in the code as part of this tutorial. Now he asks us to modify the code so that it uses L1 regularization instead of L2. This link takes you straight to the part of the tutorial I am talking about.

The weight update rule for L2 regularization with stochastic gradient descent is:

$$w \rightarrow \left(1 - \frac{\eta \lambda}{n}\right) w - \frac{\eta}{m} \sum_x \frac{\partial C_x}{\partial w},$$

where n is the size of the training set and m is the size of the mini-batch.

Nielsen implemented this in Python as follows:

self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw
                for w, nw in zip(self.weights, nabla_w)]
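
To make the rule concrete, here is a small standalone sketch (not part of Nielsen's code; the weight and gradient values are made up) of what one such update does to a single weight matrix:

    import numpy as np

    # Toy illustration of the L2 update on one weight matrix.  All values are
    # made up except eta and lmbda, which are the ones I use below, and n,
    # which is the size of the book's MNIST training set.
    eta, lmbda, n, m = 0.5, 5.0, 50000, 10   # m = len(mini_batch)

    w  = np.array([[0.2, -0.4],
                   [1.0,  0.3]])
    nw = np.array([[0.1,  0.0],
                   [-0.2,  0.05]])           # gradients summed over the mini-batch

    w = (1 - eta * (lmbda / n)) * w - (eta / m) * nw
    # decay factor 1 - eta*lmbda/n = 0.99995: each weight shrinks only slightly
    print(w)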

With L1 regularization, the update rule becomes:

$$w \rightarrow w - \frac{\eta \lambda}{n}\, \mathrm{sgn}(w) - \frac{\eta}{m} \sum_x \frac{\partial C_x}{\partial w}.$$

I tried to implement it as follows:

self.weights = [(w - eta* (lmbda/len(mini_batch)) * np.sign(w) - (eta/len(mini_batch)) * nw)
                 for w, nw in zip(self.weights, nabla_w)]        

Suddenly my network's classification accuracy is roughly at chance level... How can that be? Did I make a mistake in implementing L1 regularization? The network has 30 hidden neurons, a learning rate of 0.5, and lambda = 5.0. When I used L2 regularization everything was fine.

For convenience, here is the complete update function:

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """Update the network's weights and biases by applying gradient
    descent using backpropagation to a single mini batch.  The
    ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
    learning rate, ``lmbda`` is the regularization parameter, and
    ``n`` is the total size of the training data set.

    """
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw      
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]

You have miscalculated. The code translation of the formula you are trying to implement is:

self.weights = [
    (w - eta * (lmbda / n) * np.sign(w) - (eta / len(mini_batch)) * nw)
    for w, nw in zip(self.weights, nabla_w)]

The required change is:

  • in the regularization term, divide lmbda by n (the size of the whole training set), not by len(mini_batch); the gradient term keeps its (eta / len(mini_batch)) averaging, exactly as in the L2 code

With eta = 0.5 and lmbda = 5.0, dividing by the mini-batch size makes the L1 shrinkage term thousands of times larger than intended, so every update drags the weights toward zero much faster than the gradients can pull them anywhere useful, and the classifier ends up at chance accuracy; the sketch below makes the numbers explicit.
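
Here is a minimal sketch (plain Python, not part of the tutorial) comparing the two shrinkage coefficients. It assumes the book's usual setup of n = 50,000 MNIST training examples and mini-batches of size 10, which your question does not state explicitly:

    # Sketch only: compare the per-update L1 shrinkage coefficient under the two
    # versions of the regularization term.  eta and lmbda are taken from the
    # question; n = 50000 and mini_batch_size = 10 are assumed from the book.
    eta, lmbda = 0.5, 5.0
    n, mini_batch_size = 50000, 10

    shrink_yours   = eta * (lmbda / mini_batch_size)  # amount subtracted from |w| per update
    shrink_correct = eta * (lmbda / n)

    print(shrink_yours)                     # 0.25
    print(shrink_correct)                   # 5e-05
    print(shrink_yours / shrink_correct)    # 5000.0

A fixed 0.25 step toward zero on every weight, every update, dwarfs the gradient term, so the network cannot retain anything it learns; with lmbda / n the shrinkage is gentle enough for training to behave much like the L2 case.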