Loss with custom backward function in PyTorch - exploding loss in simple MSE example

Before working on something more complicated, where I know I will have to implement my own backward pass, I wanted to try something nice and simple. So I attempted linear regression with a mean squared error loss in PyTorch. Things went wrong as soon as I defined my own backward method (see the third implementation option below), and I suspect it's because I haven't thought clearly enough about what exactly I need to hand back to PyTorch as gradients. So what I'm probably looking for is some explanation/clarification/advice on what PyTorch expects me to provide here, and in what form.

I'm using PyTorch 1.7.0, so a bunch of older examples no longer work (there is a different way of defining user-defined autograd functions now, as described in the documentation).
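For reference, the new-style interface defines forward and backward as static methods on a torch.autograd.Function subclass and calls the function via .apply. A minimal sketch with a made-up identity function (MyIdentity is only an illustration of the API shape, not part of my code):

import torch
from torch.autograd import Function

class MyIdentity(Function):

    @staticmethod
    def forward(ctx, x):
        # Stash anything the backward pass will need on ctx
        ctx.save_for_backward(x)
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Return one gradient per input of forward (here: just x)
        return grad_output

out = MyIdentity.apply(torch.ones(3, requires_grad=True))
out.sum().backward()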

First approach (standard PyTorch MSE loss function)

Let's first do it the standard way, without a custom loss function:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Let's generate some fake data
torch.manual_seed(42)
resid = torch.rand(100)    
inputs = torch.tensor([ [ xx ] for xx in range(100)] , dtype=torch.float32)
labels = torch.tensor([ (2 + 0.5*yy + resid[yy]) for yy in range(100)], dtype=torch.float32)

# Now we define a linear regression model
class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linearRegression, self).__init__()
        self.bn = torch.nn.BatchNorm1d(num_features=1)
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, inx):
        x = self.bn(inx) # Adding BN to standardize input helps us use a higher learning rate
        x = self.linear(x)
        return x
    
model = linearRegression(1, 1)     

# Using the standard mse_loss of PyTorch
epochs = 25    
mseloss = F.mse_loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()        
    print(f'epoch {epoch}, loss {loss}')    
    

This trains just fine; I end up with a loss of around 0.0824, and a plot of the fit looks good.
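For completeness, a quick way to look at the fit (a minimal plotting sketch, assuming matplotlib is available; not part of the original training code):

import matplotlib.pyplot as plt

model.eval()  # let BatchNorm use its running statistics for inference
with torch.no_grad():
    preds = model(inputs).view(-1)

plt.scatter(inputs.view(-1), labels, s=8, label='data')
plt.plot(inputs.view(-1), preds, color='red', label='fit')
plt.legend()
plt.show()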

Second approach (custom loss function, but relying on PyTorch's automatic gradient computation)

So now I replace the loss function with my own implementation of the MSE loss, but I still rely on PyTorch autograd. The only things I change here are the definition of the custom loss function, correspondingly the definition of the loss based on it, and a minor detail of how I hand the predictions and true labels over to the loss function.

#######################################################
class MyMSELoss(nn.Module):
    
    def __init__(self):
        super(MyMSELoss, self).__init__()

    def forward(self, inputs, targets):        
        tmp = (inputs-targets)**2
        loss =  torch.mean(tmp)        
        return loss
#######################################################

model = linearRegression(1, 1) 
    
mseloss = MyMSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(epochs):
    model.train()    
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()    
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
    
    print(f'epoch {epoch}, loss {loss}')

This gives exactly the same results as using the standard MSE loss function (a quick check of that equivalence follows the loss listing below). The loss per epoch looks like this:

epoch 0, loss 884.2006225585938
epoch 1, loss 821.930908203125
epoch 2, loss 718.7732543945312
epoch 3, loss 538.1835327148438
epoch 4, loss 274.50909423828125
epoch 5, loss 55.115299224853516
epoch 6, loss 2.405021905899048
epoch 7, loss 0.47621214389801025
epoch 8, loss 0.1584305614233017
epoch 9, loss 0.09725229442119598
epoch 10, loss 0.0853077694773674
epoch 11, loss 0.08297089487314224
epoch 12, loss 0.08251354098320007
epoch 13, loss 0.08242412656545639
epoch 14, loss 0.08240655809640884
epoch 15, loss 0.08240310847759247
epoch 16, loss 0.08240246027708054
epoch 17, loss 0.08240233361721039
epoch 18, loss 0.08240240067243576
epoch 19, loss 0.08240223675966263
epoch 20, loss 0.08240225911140442
epoch 21, loss 0.08240220695734024
epoch 22, loss 0.08240220695734024
epoch 23, loss 0.08240220695734024
epoch 24, loss 0.08240220695734024
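As a quick sanity check (my addition here, not part of the original post), the custom forward can be compared directly against the built-in loss on random tensors:

a = torch.randn(100)
b = torch.randn(100)
print(torch.allclose(MyMSELoss()(a, b), F.mse_loss(a, b)))  # prints True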

Third approach (custom loss function with my own backward method)

Now the final version, where I implement my own gradient for the MSE. For that, I define my own backward method in the loss function class, and apparently need to do mseloss = MyMSELoss.apply.

from torch.autograd import Function

#######################################################
class MyMSELoss(Function):
    
    @staticmethod
    def forward(ctx, y_pred, y):    
        ctx.save_for_backward(y_pred, y)
        return ( (y - y_pred)**2 ).mean()
    
    @staticmethod
    def backward(ctx, grad_output):
        y_pred, y = ctx.saved_tensors
        grad_input = torch.mean( -2.0 * (y - y_pred)).repeat(y_pred.shape[0])        
        # This fails, as does grad_input = -2.0 * (y-y_pred)
        # I've also messed around with the sign and that's not the sole problem, either.
        return grad_input, None
    
#######################################################
    
model = linearRegression(1, 1) 
mseloss = MyMSELoss.apply
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(epochs):
    model.train()
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
    print(f'epoch {epoch}, loss {loss}')    

And this is where things go wrong: instead of decreasing, the training loss explodes. It now looks like this:

epoch 0, loss 884.2006225585938
epoch 1, loss 3471.384033203125
epoch 2, loss 47768555520.0
epoch 3, loss 1.7422577779621402e+33
epoch 4, loss inf
epoch 5, loss nan
epoch 6, loss nan
epoch 7, loss nan
epoch 8, loss nan
epoch 9, loss nan
epoch 10, loss nan
epoch 11, loss nan
epoch 12, loss nan
epoch 13, loss nan
epoch 14, loss nan
epoch 15, loss nan
epoch 16, loss nan
epoch 17, loss nan
epoch 18, loss nan
epoch 19, loss nan
epoch 20, loss nan
epoch 21, loss nan
epoch 22, loss nan
epoch 23, loss nan
epoch 24, loss nan

The gradient of the MSE loss with respect to y_pred is 2 * (y_pred - y) / N, where N is the number of samples; it has to be computed element-wise rather than averaged into a single scalar and repeated (the factor of 2 is a constant and could even be dropped).

So change your backward function to this:

@staticmethod
def backward(ctx, grad_output):
    y_pred, y = ctx.saved_tensors
    grad_input = 2 * (y_pred - y) / y_pred.shape[0]        
    return grad_input, None
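One way to convince yourself that a custom backward is correct is torch.autograd.gradcheck, which compares the analytical gradient against a numerical estimate (it needs double-precision inputs). A small sketch below, under the assumption that the backward also multiplies by grad_output: grad_output is 1.0 when backward() is called on the scalar loss, so omitting it works in the training loop above, but including it keeps the Function correct if the loss is ever composed further. The class name MyMSELossChecked is only for illustration.

import torch
from torch.autograd import Function, gradcheck

class MyMSELossChecked(Function):

    @staticmethod
    def forward(ctx, y_pred, y):
        ctx.save_for_backward(y_pred, y)
        return ((y - y_pred) ** 2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        y_pred, y = ctx.saved_tensors
        # Chain rule: incoming gradient times d(loss)/d(y_pred)
        grad_y_pred = grad_output * 2.0 * (y_pred - y) / y_pred.shape[0]
        # One return value per forward input; the labels y need no gradient
        return grad_y_pred, None

# gradcheck wants double precision and requires_grad on the input being checked
y_pred = torch.randn(20, dtype=torch.double, requires_grad=True)
y = torch.randn(20, dtype=torch.double)
print(gradcheck(MyMSELossChecked.apply, (y_pred, y)))  # prints True if gradients match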