Is there a better way to calculate loss for multi-task DNN modeling?
Suppose there are more than a thousand tasks in a multi-task deep learning setup, i.e. more than a thousand label columns, and each task (column) has its own weight. Looping over every task to accumulate the loss, as in the snippet below, takes far too long.
import torch
import torch.nn as nn

criterion = nn.MSELoss()
outputs = model(inputs)
loss = torch.tensor(0.0).to(device)
for j, w in enumerate(weights):
    # mask keeping labeled molecules for each task
    mask = labels[:, j] >= 0.0
    if len(labels[:, j][mask]) != 0:
        # the loss is the sum of each task/target loss;
        # there are labeled samples for this task, so we add its loss
        loss += criterion(outputs[j][mask], labels[:, j][mask].view(-1, 1)) * w
The dataset here is small: 10K rows with 1,024 feature columns, and the labels form a 10K × 160 sparse matrix. Each of those 160 columns is one task. The batch size is 32. Here are the shapes of the outputs, weights, and labels:
len(outputs[0]), len(outputs)
(32, 160)
weights.shape
torch.Size([160])
labels.shape
torch.Size([32, 160])
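As a quick sanity check (my own illustration, assuming each per-task output has shape (32, 1), which the .view(-1, 1) in the loop above suggests), here is how the list of 160 per-task outputs collapses into a single (32, 160) tensor, the same reshape the vectorized snippets below rely on:

import torch

# Stand-in outputs: 160 per-task tensors of shape (32, 1); this shape is
# an assumption based on the printed sizes above, not taken from the post.
outputs = [torch.randn(32, 1) for _ in range(160)]
all_out = torch.cat(outputs).view(len(outputs), -1).T  # (5120, 1) -> (160, 32) -> (32, 160)
print(all_out.shape)  # torch.Size([32, 160])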
What I really want to try, though, is a dataset with more than 1M rows, 1,024 features, and more than 10K labels. The labels are, of course, sparse.
**Update**
Thanks for your suggestions and code, Shai. I modified the code a little bit as follows, but the loss came out the same as with your code.
all_out = torch.cat(outputs).view(len(outputs), -1).T
all_mask = labels != -100.0
err = (all_out - labels) ** 2        # raw L2
err = all_mask * err                 # mask only the relevant entries in the err
mask_nums = all_mask.sum(dim=0)      # number of labeled samples per task
err = err * weights[None, :]         # weight each task
err = err / mask_nums[None, :]       # 0/0 = NaN for tasks with no labeled samples
err[err != err] = torch.tensor([0.0], requires_grad=True).to(device)  # replace NaN with 0.0
loss = err.sum()
A new issue came up: the loss doesn't backpropagate. Only the first batch produced a loss; every following batch got a loss of 0.0.
Epoch: [1/20], Step: [1/316], Loss: 4.702103614807129
Epoch: [1/20], Step: [2/316], Loss: 0.0
Epoch: [1/20], Step: [3/316], Loss: 0.0
Epoch: [1/20], Step: [4/316], Loss: 0.0
Epoch: [1/20], Step: [5/316], Loss: 0.0
After the first batch, the loss was 0 and the outputs were a 32 × 160 tensor of NaNs.
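A plausible explanation for the NaNs (my reading; the thread itself doesn't spell it out): for any task column with no labeled samples in the batch, mask_nums is 0 and the masked error is 0, so the division computes 0/0 = NaN. Overwriting the NaN values in place hides them in the forward pass, but the backward pass of the division still divides the (zeroed) upstream gradient by zero, so the gradient itself is NaN. One NaN gradient corrupts every parameter at the optimizer step, which would explain why all later batches produce NaN outputs and a 0.0 loss. A minimal sketch of the failure mode:

import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
counts = torch.tensor([2.0, 0.0])        # second "task" has no labeled samples
masked = x * torch.tensor([1.0, 0.0])    # zero out the unlabeled entry, like all_mask * err
err = masked / counts                    # err[1] = 0/0 = NaN in the forward pass
err[err != err] = 0.0                    # replace the NaN value in place
err.sum().backward()
print(x.grad)                            # tensor([0.5000, nan]) -- NaN leaks into the gradient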
How is your loss different from:
all_out = torch.cat([o_[:, None] for o_ in outputs], dim=1) # all_out has shape 32x160
all_mask = labels >= 0
err = (all_out - labels) ** 2 # raw L2
err = all_mask * err # mask only the relevant entries in the err
err = err * weights[None, :] # weight each task
err = err.sum()
There might be a small issue with the summation here - you may need to weight each task according to the number of 1s in each column of all_mask.
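One way to implement that per-column weighting without reintroducing a 0/0 division (my own sketch, not part of the original answer) is to clamp the per-task counts before dividing:

import torch

# Self-contained sketch with stand-in data; shapes match the question.
all_out = torch.randn(32, 160)               # stand-in for the model outputs of one batch
labels = torch.full((32, 160), -1.0)         # negative values mark unlabeled entries here
labels[:, 0] = 1.0                           # give task 0 some labels
weights = torch.ones(160)

all_mask = labels >= 0
err = all_mask * (all_out - labels) ** 2 * weights[None, :]
mask_nums = all_mask.sum(dim=0).clamp(min=1) # per-task label counts, floored at 1
loss = (err / mask_nums[None, :]).sum()      # per-task mean, summed over tasks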
Thanks, Shai. I finally figured it out. Here is the custom function that works well. I'm doing regression, and in this case -100 is used as the mask value.
def MSELoss2(outputs, labels, weights):
    # this one works perfectly
    all_out = torch.cat(outputs).view(len(outputs), -1).T  # list of (32, 1) -> (32, 160)
    all_mask = labels != -100.0
    mask_nums = all_mask.sum(dim=0)      # number of labeled samples per task
    err = (all_out - labels) ** 2        # raw L2
    err = err * weights[None, :]         # weight each task
    err = err / mask_nums[None, :]       # per-task mean
    return torch.sum(err[all_mask])      # sum only over the labeled entries
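For completeness, here is roughly how MSELoss2 would slot into the training loop (a sketch; model, optimizer, weights, device, and train_loader are assumed from the surrounding setup):

for inputs, labels in train_loader:
    inputs, labels = inputs.to(device), labels.to(device)
    outputs = model(inputs)                   # list of 160 per-task outputs
    loss = MSELoss2(outputs, labels, weights)
    optimizer.zero_grad()
    loss.backward()                           # gradients flow only through labeled entries
    optimizer.step()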