
Training accuracy drops and loss increases when using pack_padded_sequence / pad_packed_sequence


I am trying to train a bidirectional LSTM using pack_padded_sequence and pad_packed_sequence, but the accuracy keeps dropping while the loss keeps increasing.

This is my data loader:

X1 (X[0]): tensor([[1408, 1413, 43, ..., 0], [1452, 1415, 2443, ...], [1434, 1432, 2012, ...], [1408, 3593, 1431, 1402, ...], ..., [1420, 1474, 2645, ..., 0]]), shape: torch.Size([64, 31])

len_X1 (X[3]): [9, 19, 12, 7, 15, 4, 13, 9, 8, 14, 23, 10, 11, 31, 20, 17, 29, 5, 6, 16, 12, ...]

X2 (X[1]): tensor([[1420, 51, 2376, 1523, 2770, 35, ...], [1428, 2950, ...], ..., [1474, 3464, 42, ...]])

len_X2 (X[4]): [14, 42, 18, 21, 30, 20, ...]

t (X[2]): tensor([0, 1, 1, ...]), shape: torch.Size([64])
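For context, a batch shaped like this (padded token ids plus the original lengths) is typically built in a custom collate function with torch.nn.utils.rnn.pad_sequence. The sketch below is only an assumption about what such a loader might look like, not the poster's actual code; the field order mirrors the dump above (X1, X2, t, len_X1, len_X2) and all other names are hypothetical.

import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate(batch):
    # batch is assumed to be a list of (ids1, ids2, label) examples
    x1 = [torch.tensor(ids1) for ids1, _, _ in batch]
    x2 = [torch.tensor(ids2) for _, ids2, _ in batch]
    t = torch.tensor([label for _, _, label in batch])
    len_x1 = [len(s) for s in x1]
    len_x2 = [len(s) for s in x2]
    # pad every sequence up to the longest one in the batch with the [PAD] id (0 here)
    x1 = pad_sequence(x1, batch_first=True, padding_value=0)
    x2 = pad_sequence(x2, batch_first=True, padding_value=0)
    # same field order as the dump above: X1, X2, t, len_X1, len_X2
    return x1, x2, t, len_x1, len_x2

# loader = DataLoader(dataset, batch_size=64, collate_fn=collate)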

This is my model class:

class BiLSTM(nn.Module):
  def __init__(self,n_vocabs,embed_dims,n_lstm_units,n_lstm_layers,n_output_classes):
    super(BiLSTM,self).__init__()
    self.v = n_vocabs
    self.e = embed_dims
    self.u = n_lstm_units
    self.l = n_lstm_layers
    self.o = n_output_classes
    self.padd_idx = tokenizer.get_vocab()['[PAD]']
    self.embed = nn.Embedding(
        self.v,self.e,self.padd_idx
        )
    self.bilstm = nn.LSTM(
        self.e,self.u,self.l,batch_first = True,bidirectional = True,dropout = 0.5
        )
    self.linear = nn.Linear(
        self.u * 4,self.o
        )  
  
  def forward(self,X):
    # initial_hidden
    h0 = torch.zeros(self.l * 2,X[0].size(0),self.u).to(device)
    c0 = torch.zeros(self.l * 2,X[0].size(0),self.u).to(device)
    
    # embedding
    out1 = self.embed(X[0].to(device))
    out2 = self.embed(X[1].to(device))

    # # pack_padded_sequence
    out1 = nn.utils.rnn.pack_padded_sequence(out1,X[3],batch_first=True,enforce_sorted=False)
    out2 = nn.utils.rnn.pack_padded_sequence(out2,X[4],batch_first=True,enforce_sorted=False)
    
    # NxTxh,lxNxh
    out1,_ = self.bilstm(out1,(h0,c0))
    out2,_ = self.bilstm(out2,(h0,c0))
    
    # # pad_packed_sequence
    out1,_ = nn.utils.rnn.pad_packed_sequence(out1,batch_first=True)
    out2,_ = nn.utils.rnn.pad_packed_sequence(out2,batch_first=True)

    # take only the final time step
    out1 = out1[:,-1,:]
    out2 = out2[:,-1,:]
    
    # concatenate out1&2
    out = torch.cat((out1,out2),1)
    
    # linear layer
    out = self.linear(out)

    IoUt = torch.max(out,1)[1]
    return IoUt,out

If I remove pack_padded_sequence / pad_packed_sequence, the model trains fine:

class BiLSTM(nn.Module):
  # __init__ is identical to the version above

  def forward(self,X):
    h0 = torch.zeros(self.l * 2,X[0].size(0),self.u).to(device)
    c0 = torch.zeros(self.l * 2,X[0].size(0),self.u).to(device)

    # embedding
    out1 = self.embed(X[0].to(device))
    out2 = self.embed(X[1].to(device))

    # pack_padded_sequence (commented out)
    # out1 = nn.utils.rnn.pack_padded_sequence(out1,X[3],batch_first=True,enforce_sorted=False)
    # out2 = nn.utils.rnn.pack_padded_sequence(out2,X[4],batch_first=True,enforce_sorted=False)

    out1,_ = self.bilstm(out1,(h0,c0))
    out2,_ = self.bilstm(out2,(h0,c0))

    # pad_packed_sequence (commented out)
    # out1,_ = nn.utils.rnn.pad_packed_sequence(out1,batch_first=True)
    # out2,_ = nn.utils.rnn.pad_packed_sequence(out2,batch_first=True)

    # take only the final time step, concatenate and classify
    out1 = out1[:,-1,:]
    out2 = out2[:,-1,:]
    out = self.linear(torch.cat((out1,out2),1))
    IoUt = torch.max(out,1)[1]
    return IoUt,out

Solution

These lines of your code are wrong:

# take only the final time step
out1 = out1[:,-1,:]
out2 = out2[:,-1,:]

You say you are taking the final time step, but you are forgetting that every sequence has a different length.

nn.utils.rnn.pad_packed_sequence pads the output of every sequence until its length equals that of the longest one, so that they all have the same length.

In other words, for most of the sequences you are slicing out a zero vector (the padding).
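To see why indexing with -1 picks up padding, here is a minimal, self-contained sketch (toy sizes, not the model from the question) showing that pad_packed_sequence zero-fills every time step past a sequence's true length:

import torch
import torch.nn as nn

# two sequences of lengths 3 and 1, feature size 2, batch_first layout
x = torch.randn(2, 3, 2)
lengths = torch.tensor([3, 1])

packed = nn.utils.rnn.pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
out, _ = nn.LSTM(input_size=2, hidden_size=4, batch_first=True)(packed)
out, out_lens = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)

print(out.shape)                # torch.Size([2, 3, 4]) -- padded back to the longest length
print(out[1, -1])               # all zeros: step 3 is padding for the length-1 sequence
print(out[1, out_lens[1] - 1])  # the real last output of the length-1 sequence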

This should do what you want:

# take only the final time step
out1 = out1[range(out1.shape[0]),X3 - 1,:]
out2 = out2[range(out2.shape[0]),X4 - 1,:]

This assumes that X3 and X4 are tensors.
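If the lengths arrive as plain Python lists, as in the data loader dump above (len_X1 is X[3] and len_X2 is X[4]), converting them to tensors first keeps that indexing valid; a small sketch under that assumption:

# lengths come from the batch as lists; move them onto the same device as the outputs
X3 = torch.as_tensor(X[3], device=out1.device)
X4 = torch.as_tensor(X[4], device=out2.device)

# pick, for every sequence in the batch, the output at its own last time step
out1 = out1[range(out1.shape[0]), X3 - 1, :]
out2 = out2[range(out2.shape[0]), X4 - 1, :]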
