Batch normalization over which dimension?

Over which dimension do we calculate the mean and std? Over the hidden dimension of the NN layer, or over all samples in the batch, separately for each hidden dimension?

The paper says we normalize over the batch.

In torch.nn.BatchNorm1d, however, the input argument is num_features, which makes no sense to me.

Why does PyTorch not follow the original paper on batch normalization?

over which dimension do we calculate the mean and std?

Over the 0th dimension. For 1D input of shape (batch, num_features), it would be:

import torch

batch = 64
features = 12
data = torch.randn(batch, features)

# Statistics are taken over the batch dimension (dim=0),
# i.e. one mean and one variance per feature
mean = torch.mean(data, dim=0)  # shape: (features,)
var = torch.var(data, dim=0)    # shape: (features,)
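
As a quick sanity check (a sketch, assuming a freshly constructed layer, i.e. training mode with the default gamma=1, beta=0), this manual per-feature normalization should match what torch.nn.BatchNorm1d produces; note that the layer uses the biased variance internally:

# Compare the built-in layer against the manual per-feature formula
bn = torch.nn.BatchNorm1d(num_features=features)
manual = (data - data.mean(dim=0)) / torch.sqrt(data.var(dim=0, unbiased=False) + bn.eps)
print(torch.allclose(bn(data), manual, atol=1e-6))  # expected: True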

In torch.nn.BatchNorm1d, however, the input argument is "num_features", which makes no sense to me.

It has nothing to do with the normalization itself, but with the reparametrization of mean and var via the learnable parameters gamma and beta. From the paper, this is the scale-and-shift step y = gamma * x_hat + beta.

Both parameters used in the scale and shift phase have shape num_features, so we have to pass that value in order to initialize them with the correct shape.
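
For a quick look at those shapes (a small check against the built-in layer; PyTorch exposes gamma and beta under the names weight and bias):

import torch

bn = torch.nn.BatchNorm1d(num_features=12)
print(bn.weight.shape)        # torch.Size([12])  -- gamma
print(bn.bias.shape)          # torch.Size([12])  -- beta
print(bn.running_mean.shape)  # torch.Size([12])
print(bn.running_var.shape)   # torch.Size([12])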

Below is an example implementation from scratch, for reference:

import torch


class BatchNorm1d(torch.nn.Module):
    def __init__(self, num_features, momentum: float = 0.9, eps: float = 1e-7):
        super().__init__()
        self.num_features = num_features

        # Learnable scale (gamma) and shift (beta), one value per feature
        self.gamma = torch.nn.Parameter(torch.ones(1, self.num_features))
        self.beta = torch.nn.Parameter(torch.zeros(1, self.num_features))

        # Running statistics, used instead of batch statistics at inference time
        self.register_buffer("running_mean", torch.zeros(1, self.num_features))
        self.register_buffer("running_var", torch.ones(1, self.num_features))

        # Note: momentum here weights the *old* running statistic, which is the
        # opposite convention of the momentum argument of torch.nn.BatchNorm1d
        self.momentum = momentum
        self.eps = eps

    def forward(self, X):
        if not self.training:
            X_hat = (X - self.running_mean) / torch.sqrt(self.running_var + self.eps)
        else:
            # Batch statistics are computed over dim=0, one per feature
            mean = X.mean(dim=0, keepdim=True)
            var = ((X - mean) ** 2).mean(dim=0, keepdim=True)

            # Update running mean and variance outside of the autograd graph
            with torch.no_grad():
                self.running_mean *= self.momentum
                self.running_mean += (1 - self.momentum) * mean

                self.running_var *= self.momentum
                self.running_var += (1 - self.momentum) * var

            X_hat = (X - mean) / torch.sqrt(var + self.eps)

        # Scale and shift: y = gamma * x_hat + beta
        return X_hat * self.gamma + self.beta
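
A brief usage sketch of the class above (variable names are only illustrative): in training mode the statistics come from the current batch, while eval mode switches to the running buffers:

bn = BatchNorm1d(num_features=12)
x = torch.randn(64, 12)

# Training mode (the default): normalizes with this batch's statistics,
# so with the default gamma/beta each output feature has ~zero mean
y_train = bn(x)
print(y_train.mean(dim=0))  # values close to 0

# Eval mode: normalizes with running_mean / running_var accumulated so far
bn.eval()
y_eval = bn(x)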

Why does PyTorch not follow the original paper on batch normalization?

It does, as should be clear at a glance from the implementation above.