
label.data: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

How to solve "label.data: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first"

I created a function for training a model in PyTorch that classifies pictures as either placeholder images or product images. Now I am trying to obtain the f1_score and have added these lines to the code:

# !!!THIS LINE SHOULD OBTAIN F1_score!!!!   
f1score = f1_score(labels.data,preds)

After adding them, I get the error

can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
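
As the message says, the tensor has to be copied to host memory before NumPy can read it; sklearn's f1_score converts its inputs to NumPy arrays internally, which is why a tensor still sitting on the GPU triggers exactly this error. A minimal sketch of that behaviour, separate from the training code and with made-up values:

import torch

t = torch.tensor([1, 0, 1])
if torch.cuda.is_available():        # the error only occurs for tensors on a CUDA device
    t = t.to("cuda")
    # t.numpy()                      # raises: can't convert cuda:0 device type tensor to numpy ...
    print(t.cpu().numpy())           # copying to host memory first works: [1 0 1]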

Here you can see the full function; the referenced line should be easy to find, as I have highlighted it in caps lock:

def train_model(model,DataLoaders,criterion,optimizer,num_epochs=25,is_inception=False):
    since = time.time()
    print("model is : ",model)

    val_acc_history = []
    val_loss_history = []
    train_acc_history = []
    train_loss_history = []
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch,num_epochs - 1))
        print('-' * 10)


        # Each epoch has a training and validation phase
        for phase in ['train','val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs,labels in DataLoaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients (This can be changed to the Adam and other optimizers)
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    # Get model outputs and calculate loss
                    # Special case for inception because in training it has an auxiliary output. In train
                    #   mode we calculate the loss by summing the final output and the auxiliary output
                    #   but in testing we only consider the final output.
                    if is_inception and phase == 'train':
                        # From https://discuss.pytorch.org/t/how-to-optimize-inception-model-with-auxiliary-classifiers/7958
                        outputs,aux_outputs = model(inputs)
                        loss1 = criterion(outputs,labels)
                        loss2 = criterion(aux_outputs,labels)
                        loss = loss1 + 0.4*loss2
                    else:
                        outputs = model(inputs)
                        loss = criterion(outputs,labels)

                    _,preds = torch.max(outputs,1)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
                
                # !!!THIS LINE SHOULD OBTAIN F1_score!!!!   
                f1score = f1_score(labels.data,preds)
                

            epoch_loss = running_loss / len(DataLoaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(DataLoaders[phase].dataset)

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase,epoch_loss,epoch_acc))


            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == 'val':
                val_acc_history.append(epoch_acc)
                val_loss_history.append(epoch_loss)
            if phase == 'train':
                train_acc_history.append(epoch_acc)
                train_loss_history.append(epoch_loss)

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60,time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model,val_acc_history,train_acc_history,val_loss_history,train_loss_history

I have already tried the following, but that does not work either:

# !!!THIS LINE SHOULD OBTAIN F1_score!!!!   
f1score = f1_score(labels.cpu().data,preds)

Solution

I ran into this error myself. My first attempt at solving it was almost correct, but I also had to move preds to the CPU:

# !!!THIS LINE SHOULD OBTAIN F1_SCORE!!!!   
f1score = f1_score(labels.cpu().data,preds.cpu())
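
To make the fix concrete, here is a minimal, self-contained sketch of the corrected metric call under the same assumptions as the training function above: preds comes from torch.max(outputs, 1) and labels is the batch of targets, both living on the GPU. The tensor values below are made up for illustration; the point is only that both tensors are copied to the CPU before sklearn sees them.

import torch
from sklearn.metrics import f1_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

labels = torch.tensor([1, 0, 1, 1], device=device)   # stand-in batch of binary targets
outputs = torch.randn(4, 2, device=device)           # stand-in model outputs for two classes
_, preds = torch.max(outputs, 1)                     # predicted class indices, still on `device`

# both tensors must be moved to host memory before sklearn converts them to NumPy arrays
f1score = f1_score(labels.cpu().numpy(), preds.cpu().numpy())
print(f1score)

Note that inside the training loop f1score is recomputed for every batch, so only the last batch's value survives the epoch; if an epoch-level F1 is wanted, the per-batch labels and predictions can be collected in lists and passed to f1_score once per phase. Also, f1_score defaults to average='binary', which matches the two-class placeholder/product setup here; with more classes an explicit average argument would be needed.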

