
How to fix "RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15"

I am facing this error:

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

My input is a binary vector of length 340 and my target is a binary vector of length 8. For loss = criterion(outputs, stat_batch) I get outputs.shape = [64, 8] and stat_batch.shape = [64, 8].
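For reference, here is a minimal standalone sketch (shapes taken from the question; random data rather than the real inputs) that reproduces the same failure:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

outputs = torch.randn(64, 8)  # model outputs, shape [64, 8]
# One-hot targets, shape [64, 8] -- this 2-D target is what triggers the error
one_hot = torch.zeros(64, 8, dtype=torch.long)
one_hot[torch.arange(64), torch.randint(0, 8, (64,))] = 1

# On the PyTorch versions that still shipped the THCUNN kernels, passing a
# 2-D target here raises "multi-target not supported"; newer versions fail
# with a different shape/dtype error, but the cause is the same.
loss = criterion(outputs, one_hot)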

Here is the model:

class MMP(nn.Module):

    def __init__(self,M=1):
        super(MMP,self).__init__()
        # input layer
        self.layer1 = nn.Sequential(
            nn.Conv1d(340,256,kernel_size=1,stride=1,padding=0),nn.ReLU())
        self.layer2 = nn.Sequential(
            nn.Conv1d(256,128,kernel_size=1,stride=1,padding=0),nn.ReLU())
        self.layer3 = nn.Sequential(
            nn.Conv1d(128,64,kernel_size=1,stride=1,padding=0),nn.ReLU())
        self.drop1 = nn.Sequential(nn.Dropout())
        self.batch1 = nn.BatchNorm1d(128)
        # LSTM
        self.lstm1 = nn.Sequential(nn.LSTM(
            input_size=64,hidden_size=128,num_layers=2,bidirectional=True,batch_first=True))
        self.fc1 = nn.Linear(128*2,8)
        self.sof = nn.Softmax(dim=-1)

    def forward(self,x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.drop1(out)
        out = out.squeeze()
        out = out.unsqueeze(0)
        #out = out.batch1(out)
        out,_ = self.lstm1(out)
        print("lstm",out.shape)
        out = self.fc1(out)
        out = out.squeeze()
        #out = out.squeeze()
        out = self.sof(out)
        return out

# train model
criterion = nn.CrossEntropyLoss()
if CUDA:
    criterion = criterion.cuda()
optimizer = optim.SGD(model.parameters(),lr=LEARNING_RATE,momentum=0.9)

for epoch in range(N_EPOCHES):
    tot_loss=0
    # Training
    for i,(seq_batch,stat_batch) in enumerate(training_generator):
        # Transfer to GPU
        seq_batch,stat_batch = seq_batch.to(device),stat_batch.to(device)
        print(i)
        print(seq_batch)
        print(stat_batch)
        optimizer.zero_grad()
        # Model computation
        seq_batch = seq_batch.unsqueeze(-1)
        outputs = model(seq_batch)
        if CUDA:
            loss = criterion(outputs,stat_batch).float().cuda()
        else:
            loss = criterion(outputs.view(-1),stat_batch.view(-1))
        print(f"Epoch: {epoch},number: {i},loss:{loss.item()}...\n\n")

        tot_loss += loss.item()
        loss.backward()
        optimizer.step()

Solution

Your target stat_batch must have shape (64,), because nn.CrossEntropyLoss expects class indices, not one-hot encodings.

Either build your label tensor with class indices in the first place, or convert it with stat_batch.argmax(dim=1), as in the sketch below.
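A minimal sketch of the fix, using the variable names from the question (assuming stat_batch holds the one-hot rows described above):

# Convert one-hot rows to class indices before the loss call:
# shape [64, 8] -> shape [64]
stat_batch = stat_batch.argmax(dim=1)
loss = criterion(outputs, stat_batch)

Note also that nn.CrossEntropyLoss applies log-softmax internally and therefore expects raw logits, so the final nn.Softmax layer in the model is redundant for training and is usually dropped.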
