Difference between logloss in sklearn and BCEloss in Pytorch?

Looking at the documentation for log_loss in sklearn and BCELoss in PyTorch, these should be the same, i.e. just the normal log loss with weights applied. However, they behave differently, whether or not weights are applied. Can anyone explain this to me? I could not find the source code for BCELoss (which internally calls binary_cross_entropy).

import torch
import torch.nn as nn
from sklearn.metrics import log_loss
import numpy as np

input = torch.randn((3, 1), requires_grad=True)
target = torch.ones((3, 1), requires_grad=False)
w = torch.randn((3, 1), requires_grad=False)

# ----- With weights
w = torch.sigmoid(w)  # squash to (0, 1) so the weights are positive
criterion_test = nn.BCELoss(weight=w)
print(criterion_test(input=torch.sigmoid(input), target=torch.sigmoid(target)))
print(log_loss(y_true=target.detach().numpy(),
               y_pred=torch.sigmoid(input).detach().numpy(),
               sample_weight=w.detach().numpy().reshape(-1),
               labels=np.array([0., 1.])))
print("")
print("")
# ----- Without weights
criterion_test = nn.BCELoss()
print(criterion_test(input=torch.sigmoid(input), target=torch.sigmoid(target)))
print(log_loss(y_true=target.detach().numpy(),
               y_pred=torch.sigmoid(input).detach().numpy(),
               labels=np.array([0., 1.])))

Actually, I figured it out. It turns out that BCELoss and log_loss behave differently whenever the sum of the weights differs from the number of elements in the input array: with reduction='mean', PyTorch's BCELoss divides the weighted sum of per-sample losses by N (the number of elements), while sklearn's log_loss divides it by the sum of the weights. Interesting.
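The two normalizations can be checked by hand. This is a minimal sketch (assuming BCELoss's default reduction='mean'): the PyTorch result matches the weighted loss sum divided by N, the sklearn result matches it divided by the sum of the weights, and rescaling the weights so they sum to N makes the two agree:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import log_loss

torch.manual_seed(0)
p = torch.sigmoid(torch.randn(3, 1))   # predicted probabilities
t = torch.ones(3, 1)                   # targets, all class 1
w = torch.sigmoid(torch.randn(3, 1))   # positive sample weights

# elementwise binary cross-entropy, before any reduction
per_sample = -(t * p.log() + (1 - t) * (1 - p).log())

# PyTorch: weighted losses averaged over the number of elements N
bce = nn.BCELoss(weight=w)(p, t)
manual_torch = (w * per_sample).sum() / w.numel()
assert torch.allclose(bce, manual_torch)

# sklearn: weighted average, i.e. divided by the sum of the weights
sk = log_loss(t.numpy().ravel(), p.numpy().ravel(),
              sample_weight=w.numpy().ravel(), labels=[0., 1.])
manual_sk = (w * per_sample).sum() / w.sum()
assert np.isclose(sk, manual_sk.item())

# They coincide once the weights are rescaled to sum to N
w_scaled = w * w.numel() / w.sum()
bce_scaled = nn.BCELoss(weight=w_scaled)(p, t)
sk_scaled = log_loss(t.numpy().ravel(), p.numpy().ravel(),
                     sample_weight=w_scaled.numpy().ravel(), labels=[0., 1.])
assert np.isclose(bce_scaled.item(), sk_scaled, atol=1e-6)
```

So with uniform weights (or none) the two agree, and any discrepancy under non-trivial weights is just this difference in the denominator.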

For the computation without weights, using BCEWithLogitsLoss you get the same result as with sklearn.metrics.log_loss:

import torch
import torch.nn as nn
from sklearn.metrics import log_loss
import numpy as np

input = torch.randn((3, 1), requires_grad=True)
target = torch.ones((3, 1), requires_grad=False)

# ----- Without weights
criterion = torch.nn.BCEWithLogitsLoss()
print('{:.6f}'.format(criterion(input, target)))
print('{:.6f}'.format(log_loss(y_true=target.detach().numpy(),
                               y_pred=torch.sigmoid(input).detach().numpy(),
                               labels=np.array([0., 1.]))))

Note (from the BCEWithLogitsLoss docs):

This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
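The stability difference is easy to see with an extreme logit. A small sketch: for a logit of 100 and target 0, the fused loss stays finite, while a plain sigmoid followed by a hand-written BCE blows up, because sigmoid(100) rounds to exactly 1.0 in float32 and log(1 - 1.0) is -inf. (nn.BCELoss itself would mask this particular case, since it clamps its log outputs at -100, but the gradient problem of the saturated sigmoid remains.)

```python
import torch
import torch.nn as nn

logit = torch.tensor([[100.0]])   # extreme logit
target = torch.tensor([[0.0]])

# Fused version: computed via the log-sum-exp trick, stays finite
fused = nn.BCEWithLogitsLoss()(logit, target)

# Plain sigmoid + hand-written BCE: sigmoid(100) == 1.0 in float32,
# so the log(1 - p) term is log(0) = -inf and the loss overflows
p = torch.sigmoid(logit)
naive = -(target * torch.log(p) + (1 - target) * torch.log(1 - p)).mean()

print(fused.item())   # 100.0
print(naive.item())   # inf
```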