How does PyTorch L1-norm pruning work?

Let me first show the result I got. This is one convolutional layer of my model; I am only showing the weights of its 11 filters (11 3x3 filters, channels = 1).

Left side: original weights. Right side: pruned weights.

So I want to know how `torch.nn.utils.prune.l1_unstructured` works. According to the PyTorch documentation it prunes the units with the lowest L1-norm, but as far as I know, L1-norm pruning is a filter pruning method: it removes whole filters, ranking them by an equation on the filter's L1-norm (the sum of the absolute values of its weights), rather than pruning individual weights. So I am curious how this function actually works.
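To make that concrete, here is a rough sketch (my own illustration, not code from my model) of what I understand filter-level L1-norm pruning to do: score every filter by the sum of the absolute values of its weights and zero out the lowest-scoring filters entirely. The layer shape and the number of filters to remove are made up for the example.

import torch
import torch.nn as nn

# Illustrative conv layer: 11 filters of size 3x3 with a single input channel,
# matching the shapes described above.
conv = nn.Conv2d(in_channels=1, out_channels=11, kernel_size=3, bias=False)

with torch.no_grad():
    # One L1-norm score per filter: sum of |w| over each filter's weights.
    scores = conv.weight.abs().sum(dim=(1, 2, 3))
    # Zero out, say, the 3 filters with the smallest scores (whole filters).
    drop = torch.topk(scores, k=3, largest=False).indices
    conv.weight[drop] = 0.0

If I understand the API correctly, the built-in structured counterpart of this is prune.ln_structured(conv, 'weight', amount=3, n=1, dim=0), which masks whole output channels by their L1-norm, whereas l1_unstructured works on individual weights.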

Here is my pruning code:

import torch.nn.utils.prune as prune

# Collect the (module, parameter name) pairs to prune.
parameters_to_prune = (
    (model.input_layer[0], 'weight'),
    (model.hidden_layer1[0], 'weight'),
    (model.hidden_layer2[0], 'weight'),
    (model.output_layer[0], 'weight')
)

# Prune this fraction of weights globally, ranked by absolute value.
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=(pruned_percentage / 100),
)
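For completeness, one way to check the resulting per-layer sparsity (a small addition for illustration, reusing parameters_to_prune and the imports from above) is:

import torch

for module, name in parameters_to_prune:
    weight = getattr(module, name)
    zeros = torch.sum(weight == 0).item()
    # Fraction of entries in this parameter that were masked to zero.
    print(f"{name} of {module.__class__.__name__}: "
          f"{zeros / weight.nelement():.1%} pruned")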

The nn.utils.prune.l1_unstructured utility does not prune whole filters; it prunes individual parameter components, as you observed in your sheet. That is, the components with the lowest L1-norm (i.e. the smallest absolute values) get masked.
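To spell out that selection rule (a simplified sketch of my own, not the actual torch.nn.utils.prune source): flatten the parameter, take the amount fraction of entries with the smallest absolute value, and build a mask that zeroes exactly those entries.

import torch

def l1_unstructured_mask(weight, amount):
    # Simplified illustration of the rule: mask the `amount` fraction of
    # individual entries with the smallest absolute value.
    n_prune = int(round(amount * weight.nelement()))
    mask = torch.ones_like(weight)
    if n_prune > 0:
        idx = torch.topk(weight.abs().flatten(), n_prune, largest=False).indices
        mask.view(-1)[idx] = 0.0
    return mask

Applied to the example below, a 10-element weight with amount 0.3 gets its three smallest-magnitude entries masked, which is exactly what the output shows.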


Here is the minimal example discussed in the comments below:

>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.utils.prune as prune
>>> m = nn.Linear(10, 1, bias=False)
>>> m.weight = nn.Parameter(torch.arange(10).float())
>>> prune.l1_unstructured(m, 'weight', 0.3)
>>> m.weight
tensor([0., 0., 0., 3., 4., 5., 6., 7., 8., 9.], grad_fn=<MulBackward0>)
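Continuing that example (my addition, not part of the original answer): pruning re-registers the original values as m.weight_orig and stores the binary mask as the buffer m.weight_mask; m.weight is then recomputed as their elementwise product by a forward pre-hook, which is why it prints with grad_fn=<MulBackward0>. Inspecting them should look roughly like this:

>>> m.weight_orig
Parameter containing:
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], requires_grad=True)
>>> m.weight_mask
tensor([0., 0., 0., 1., 1., 1., 1., 1., 1., 1.])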