Is there a better way to calculate the loss for multi-task DNN modeling?
Suppose there are more than a thousand tasks in a multi-task deep learning setup, i.e., more than a thousand columns of labels. In this case each task (column) has its own weight. Looping over every task with the following code snippet to accumulate the total loss takes a very long time.
criterion = nn.MSELoss()
outputs = model(inputs)
loss = torch.tensor(0.0).to(device)
for j, w in enumerate(weights):
    # mask keeping the labeled molecules for each task
    mask = labels[:, j] >= 0.0
    if len(labels[:, j][mask]) != 0:
        # the loss is the sum of each task/target loss;
        # there are labeled samples for this task, so we add its loss
        loss += criterion(outputs[j][mask], labels[:, j][mask].view(-1, 1)) * w
This dataset is small: 10K rows with 1024 feature columns, and the labels form a 10K × 160 sparse matrix. Each of the 160 columns is a task. The batch size is 32. Here are the shapes of the outputs, labels, and weights:
len(outputs[0]), len(outputs)
(32, 160)
weights.shape
torch.Size([160])
labels.shape
torch.Size([32, 160])
But what I really want to try is a dataset with more than 1 million rows, 1024 features, and more than 10K labels. The labels are, of course, sparse.
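For reference, here is a minimal, self-contained sketch that reproduces the small-dataset shapes above with dummy data; the toy model, the 256-unit hidden layer, and the -1.0 sentinel for unlabeled entries are my own assumptions for illustration, not the actual model or data.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch_size, n_features, n_tasks = 32, 1024, 160

# dummy inputs and a sparse label matrix: -1.0 marks unlabeled entries,
# matching the `labels[:, j] >= 0.0` mask used in the loop above
inputs = torch.randn(batch_size, n_features)
labels = torch.full((batch_size, n_tasks), -1.0)
labeled = torch.rand(batch_size, n_tasks) > 0.9       # label roughly 10% of entries
labels[labeled] = torch.rand(int(labeled.sum()))
weights = torch.ones(n_tasks)
inputs, labels, weights = inputs.to(device), labels.to(device), weights.to(device)

# a toy multi-head network: a shared trunk plus one linear head per task,
# so model(inputs) is a list of 160 tensors, each of shape (32, 1)
class ToyMultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, 256), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(256, 1) for _ in range(n_tasks))

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]

model = ToyMultiTaskNet().to(device)
outputs = model(inputs)
print(len(outputs), len(outputs[0]))   # 160 heads, each holding 32 predictions
```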
**Update**
Thanks for your suggestions and code, Shai. I modified the code a little bit as follows, and the loss came out the same as with your code.
all_out = torch.cat(outputs).view(len(outputs), -1).T
all_mask = labels != -100.0
err = (all_out - labels) ** 2    # raw L2
err = all_mask * err             # mask only the relevant entries in the err
mask_nums = all_mask.sum(axis=0)
err = err * weights[None, :]     # weight each task
err = err / mask_nums[None, :]
err[err != err] = torch.tensor([0.0], requires_grad=True).to(device)  # replace nan with 0.0
loss = err.sum()
A newly raised question is that the loss can't be back-propagated. Only the loss of the first batch was calculated; the following batches got a loss of 0.0.
Epoch: [1/20], Step: [1/316], Loss: 4.702103614807129
Epoch: [1/20], Step: [2/316], Loss: 0.0
Epoch: [1/20], Step: [3/316], Loss: 0.0
Epoch: [1/20], Step: [4/316], Loss: 0.0
Epoch: [1/20], Step: [5/316], Loss: 0.0
After the first batch, the loss was 0 and the outputs were a 32 × 160 matrix of nan.
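One plausible cause (my own guess, not something established in the thread) is that a task column with zero labeled samples in a batch makes mask_nums zero, so err / mask_nums produces nan from 0/0; replacing the nan afterwards cleans the forward value but not the backward pass, because the division's gradient still multiplies by 1/0. A minimal sketch of that gotcha:

```python
import torch

err = torch.tensor([4.0, 0.0], requires_grad=True)  # pretend per-task (masked) squared errors
counts = torch.tensor([2.0, 0.0])                    # second task has no labeled samples

out = err / counts        # [2.0, nan] because of the 0/0 in the second task
out = out.clone()
out[out != out] = 0.0     # replace nan with 0.0, as in the updated code

loss = out.sum()
loss.backward()
print(loss)               # tensor(2., ...) -- the forward value looks clean
print(err.grad)           # tensor([0.5000, nan]) -- nan has already leaked into the gradient
```

Once such a nan gradient reaches the shared layers, the next optimizer step turns their weights to nan, which would be consistent with the all-nan outputs from the second batch onward.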
**Solution**
Your loss is equivalent to:
all_out = torch.cat([o_[:, None] for o_ in outputs], dim=1)  # all_out has shape 32x160
all_mask = labels >= 0
err = (all_out - labels) ** 2   # raw L2
err = all_mask * err            # mask only the relevant entries in the err
err = err * weights[None, :]    # weight each task
err = err.sum()
There may be a small issue with the sum here - you might need to weight by the number of 1s in each column of all_mask.
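Concretely, that would mean normalizing before the final sum; the clamp below is my own addition (not part of the original answer) to sidestep a 0/0 when a task happens to have no labeled samples in a batch:

```python
# instead of `err = err.sum()` directly, average each task over its labeled samples first
mask_nums = all_mask.sum(dim=0).clamp(min=1)  # labeled samples per task, never zero
err = err / mask_nums[None, :]
loss = err.sum()
```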
Thanks, Shai. I finally figured it out. Here is the custom loss function that works well. I am doing regression, and in this case -100 is used as the mask value.
def MSELoss2(outputs, labels, weights):
    # This one works perfectly
    all_out = torch.cat(outputs).view(len(outputs), -1).T  # (batch, n_tasks)
    all_mask = labels != -100.0                 # labeled entries
    mask_nums = all_mask.sum(axis=0)            # labeled samples per task
    err = (all_out - labels) ** 2               # raw L2
    err = err * weights[None, :]                # weight each task
    err = err / mask_nums[None, :]              # normalize by labeled samples per task
    return torch.sum(err[all_mask])             # sum only over the labeled entries
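For completeness, a sketch of how the function might be called in a training loop; the optimizer choice and the loader variable are assumptions for illustration, not from the original post:

```python
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for inputs, labels in loader:                 # labels use -100.0 for unlabeled entries
    inputs, labels = inputs.to(device), labels.to(device)
    optimizer.zero_grad()
    outputs = model(inputs)                   # list of per-task outputs, each (batch_size, 1)
    loss = MSELoss2(outputs, labels, weights)
    loss.backward()
    optimizer.step()
```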