How to read the predicted label of a Neural Network with Cross Entropy Loss?

I am predicting the quality of red wine with a neural network, using the red wine dataset available in the UCI Machine Learning Repository, with PyTorch and cross-entropy as the loss function.

Here is my code:

import torch
import torch.nn as nn
import torch.nn.functional as F

input_size = len(input_columns)
hidden_size = 12
output_size = 6  # because there are 6 classes

#Loss function
loss_fn = F.cross_entropy

class WineQuality(nn.Module):
    def __init__(self):
        super().__init__()
        # input to hidden layer
        self.linear1 = nn.Linear(input_size, hidden_size)
        # hidden layer and output
        self.linear2 = nn.Linear(hidden_size, output_size)
        
    def forward(self, xb): 
        out = self.linear1(xb)
        out = F.relu(out)
        out = self.linear2(out)
        return out
    
    def training_step(self, batch):
        inputs, targets = batch 
        # Generate predictions
        out = self(inputs) 
        # Calculate loss
        loss = loss_fn(out,torch.argmax(targets, dim=1))
        return loss
    
    def validation_step(self, batch):
        inputs, targets = batch
        # Generate predictions
        out = self(inputs)
        # Calculate loss
        loss = loss_fn(out, torch.argmax(targets, dim=1))
        return {'val_loss': loss.detach()}
        
    def validation_epoch_end(self, outputs):
        batch_losses = [x['val_loss'] for x in outputs]
        epoch_loss = torch.stack(batch_losses).mean()   # Combine losses
        return {'val_loss': epoch_loss.item()}
    
    def epoch_end(self, epoch, result, num_epochs):
        # Print result every 100th epoch
        if (epoch+1) % 100 == 0 or epoch == num_epochs-1:
            print("Epoch [{}], val_loss: {:.4f}".format(epoch+1, result['val_loss']))

model = WineQuality()

def evaluate(model, val_loader):
    outputs = [model.validation_step(batch) for batch in val_loader]
    return model.validation_epoch_end(outputs)

def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
    history = []
    optimizer = opt_func(model.parameters(), lr)
    for epoch in range(epochs):
        # Training Phase 
        for batch in train_loader:
            loss = model.training_step(batch)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        # Validation phase
        result = evaluate(model, val_loader)
        model.epoch_end(epoch, result, epochs)
        history.append(result)
    return history

loss_value = evaluate(model, val_loader)

#model=WineQuality()
epochs = 1000
lr = 1e-5
history = fit(epochs, lr, model, train_loader, val_loader)

I can see that the model is fine and the loss decreases. The problem comes when I have to make a prediction on a single example:

def predict_single(input, target, model):
    inputs = input.unsqueeze(0)
    predictions = model(inputs)
    prediction = predictions[0].detach()
    print("Input:", input)
    print("Target:", target)
    print("Prediction:", prediction)
    return prediction

input, target = val_df[1]
prediction = predict_single(input, target, model)

This returns:

Input: tensor([0.8705, 0.3900, 2.1000, 0.0650, 4.1206, 3.3000, 0.5300, 0.2610])
Target: tensor([6.])
Prediction: tensor([ 3.6465,  0.2800, -0.4561, -1.6733, -0.6519, -0.1650])

I want to see what these logits relate to: I know that the highest logit corresponds to the predicted class, but I want to see the class itself. I also applied softmax to rescale these values into probabilities:

prediction = F.softmax(prediction, dim=0)
print(prediction)
output = model(input.unsqueeze(0))
_,pred = output.max(1)
print(pred)

The output is the following:

tensor([0.3296, 0.1361, 0.1339, 0.1324, 0.1335, 0.1346])
tensor([0])

I don't know what that tensor ([0]) is. I expected my predicted label to be a value like 6.1 if the target is 6, but I can't obtain this value.

First, let's review the way you compute the loss. From your code:

loss = loss_fn(out,torch.argmax(targets, dim=1))

You are using torch.argmax, which in your case expects targets of size torch.Size([num_samples, num_classes]), e.g. torch.Size([32, 6]). Are you sure your training labels are compatible with this size? From your post I understand that you read the label class as a number (from 3 to 8), so its size is torch.Size([32, 1]). Consequently, when you call torch.argmax on your training data, it always returns 0.
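A minimal sketch of this pitfall (the 4-sample targets tensor below is illustrative, not your actual data):

```python
import torch

# Targets stored as raw quality scores, shape [num_samples, 1]
targets = torch.tensor([[6.], [5.], [7.], [3.]])  # shape [4, 1]

# argmax over dim=1 of a [N, 1] tensor can only ever return index 0,
# because there is a single element along that dimension
labels = torch.argmax(targets, dim=1)
print(labels)  # tensor([0, 0, 0, 0]), regardless of the target values
```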

That is why the model learns to predict class 0, regardless of the input.

Now, your class labels (used for training) range from 3 to 8. Unfortunately, if we use these labels as-is with your loss_fn (torch.nn.CrossEntropyLoss / F.cross_entropy), it will behave as if there were 9 labels in total (class 0 to class 8), because the maximum class label is 8. Therefore, you need to map 3 to 8 -> 0 to 5. For the loss calculation, use:

loss = loss_fn(out, (targets - 3).squeeze(1).long())  # shift 3..8 -> 0..5; cross_entropy expects 1-D long class indices
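With the labels shifted for training, the predicted class index must be shifted back by 3 to recover the wine-quality label. A minimal sketch (the logits below are illustrative, not from your model):

```python
import torch
import torch.nn.functional as F

# Illustrative logits for one sample over 6 classes (qualities 3..8)
logits = torch.tensor([[0.2, 1.5, 3.6, 0.1, -0.4, -1.7]])

probs = F.softmax(logits, dim=1)         # probabilities over the 6 classes
pred_index = torch.argmax(probs, dim=1)  # index in 0..5
pred_quality = pred_index + 3            # map back to the original 3..8 labels
print(pred_quality.item())  # -> 5
```

Note that the model outputs a discrete class, not a continuous score, so you will get 5 or 6 rather than a value like 6.1; for that you would need a regression setup instead of classification.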