How to plot loss and accuracy with TensorBoard

I have three datasets (train, test, and validation). I concatenate the training and test sets to do k-fold cross validation; I do not use the validation set. I am new to TensorBoard. Thanks to a previous question, I can already plot the training loss and accuracy at each epoch. How can I also plot the test loss and accuracy per epoch, so I can watch the performance as training progresses? Should I use the validation set for this, and if so, how?
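One option is a small evaluation helper run once per epoch, logging under separate `test/...` tags so TensorBoard draws the train and test curves on matching step axes. This is a minimal sketch, not the asker's exact setup: the `evaluate` helper, the toy `Linear` model, and the random data below are stand-ins for the real `PPS` model and `testloader`. The held-out validation set could be evaluated the same way under `val/...` tags if an untouched per-epoch estimate is wanted.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.tensorboard import SummaryWriter

def evaluate(model, loader, criterion, device="cpu"):
    """Average loss and accuracy (%) over a DataLoader, without gradients."""
    model.eval()
    total_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            logits = model(inputs)
            total_loss += criterion(logits, targets).item()
            correct += (logits.argmax(1) == targets).sum().item()
            total += targets.size(0)
    return total_loss / len(loader), 100.0 * correct / total

# Toy stand-ins so the sketch runs end to end; replace with the real model/loader.
X, y = torch.randn(64, 10), torch.randint(0, 3, (64,))
loader = DataLoader(TensorDataset(X, y), batch_size=16)
model = torch.nn.Linear(10, 3)
criterion = torch.nn.CrossEntropyLoss()
writer = SummaryWriter()

for epoch in range(2):
    # ... run one training epoch here, logging under train/ tags ...
    test_loss, test_acc = evaluate(model, loader, criterion)
    writer.add_scalar("test/loss", test_loss, epoch)    # same step axis as train/loss
    writer.add_scalar("test/accuracy", test_acc, epoch)
writer.close()
```

With matching `train/...` and `test/...` tags, both curves can be overlaid in the TensorBoard scalars view to spot overfitting per epoch.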

# Create the TensorBoard writer used below (missing from the original snippet)
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()  # event files go to ./runs by default

# Prepare dataset by concatenating Train/Test part; we split later.
training_set = CustomDataset('one_hot_train_data.txt','train_3states_target.txt') #training_set = CustomDataset_3('one_hot_train_data.txt','train_5_target.txt')
training_generator = torch.utils.data.DataLoader(training_set,**params)
val_set = CustomDataset('one_hot_val_data.txt','val_3states_target.txt')
test_set = CustomDataset('one_hot_test_data.txt','test_3states_target.txt')
testloader_ = torch.utils.data.DataLoader(test_set,**params)
dataset = ConcatDataset([training_set,test_set])
kfold = KFold(n_splits=k_folds, shuffle=True)
results = {}  # per-fold test accuracy, reported at the end

# Start print
print('--------------------------------')

# K-fold Cross Validation model evaluation
for fold,(train_ids,test_ids) in enumerate(kfold.split(dataset)):
    # Print
    print(f'FOLD {fold}')
    print('--------------------------------')

    # Sample elements randomly from a given list of ids,no replacement.
    train_subsampler = torch.utils.data.SubsetRandomSampler(train_ids)
    test_subsampler = torch.utils.data.SubsetRandomSampler(test_ids)

    # Define data loaders for training and testing data in this fold
    trainloader = torch.utils.data.DataLoader(
        dataset, **params, sampler=train_subsampler)
    # Pass **params here too, otherwise the test loader falls back to batch_size=1
    testloader = torch.utils.data.DataLoader(
        dataset, **params, sampler=test_subsampler)
    # Init the neural network
    model = PPS()
    model.to(device)
    # Initialize optimizer
    optimizer = optim.SGD(model.parameters(),lr=LEARNING_RATE)
    # Run the training loop for defined number of epochs
    for epoch in range(0,N_EPOCHES):
        # Print epoch
        print(f'Starting epoch {epoch + 1}')
        # Set current loss value
        running_loss = 0.0
        epoch_loss = 0.0
        a = []
        # Iterate over the DataLoader for training data
        for i,data in enumerate(trainloader,0):
            inputs, targets = data
            inputs = inputs.unsqueeze(-1)
            inputs = inputs.to(device)
            targets = targets.to(device)
            # Zero the gradients
            optimizer.zero_grad()
            # Perform forward pass
            loss,outputs = model(inputs,targets)
            outputs = outputs.to(device)

            # Perform backward pass
            loss.backward()
            # Perform optimization
            optimizer.step()
            # print statistics
            running_loss += loss.item()
            epoch_loss += loss.item()  # .item() detaches; accumulating tensors keeps the autograd graph alive
            a.append(torch.sum(outputs == targets))
            # print(outputs.shape,outputs.shape[0])

            if i % 2000 == 1999:  # print every 2000 mini-batches
                print('[%d,%5d] loss: %.3f' %
                      (epoch + 1,i + 1,running_loss / 2000),"acc",torch.sum(outputs == targets) / float(outputs.shape[0]))
                running_loss = 0.0
            # sum_acc += (outputs == stat_batch.argmax(1)).float().sum()
        print("epoch", epoch + 1, "acc", sum(a) / len(train_subsampler), "loss", epoch_loss / len(trainloader))
        accuracy = 100 * sum(a) / len(train_subsampler)  # this fold trains on the subsample, not the full set
        avg_loss = epoch_loss / len(trainloader)         # average loss; the original divided the accuracy counts
        writer.add_scalar('train/loss', avg_loss, epoch)
        writer.add_scalar('train/accuracy', accuracy, epoch)  # tag was mislabeled 'accuracy/loss'
    state = {'epoch': epoch + 1,'state_dict': model.state_dict(),'optimizer': optimizer.state_dict() }
    torch.save(state,path + name_file + "model_epoch_i_" + str(epoch) + str(fold)+".cnn")
    #torch.save(model.state_dict(),path + name_file + "model_epoch_i_" + str(epoch) + ".cnn")
    # Print about testing
    print('Starting testing')

    # Evaluation for this fold
    correct, total = 0, 0
    with torch.no_grad():
        # Iterate over the test data and generate predictions
        for i, data in enumerate(testloader, 0):
            # Get inputs and move them to the same device as the model
            inputs, targets = data
            inputs = inputs.unsqueeze(-1)
            inputs = inputs.to(device)
            targets = targets.to(device)
            # Generate outputs; as in the training loop, the model returns loss and predictions
            loss, outputs = model(inputs, targets)
            outputs = outputs.to(device)
            # Set total and correct
            total += targets.size(0)
            correct += (outputs == targets).sum().item()
    # Print accuracy
    print('Accuracy for fold %d: %.2f %%' % (fold, 100.0 * correct / total))
    print('--------------------------------')
    results[fold] = 100.0 * correct / total

# Print fold results
print(f'K-FOLD CROSS VALIDATION RESULTS FOR {k_folds} FOLDS')
print('--------------------------------')
total_acc = 0.0  # avoid shadowing the builtin sum()
for key, value in results.items():
    print(f'Fold {key}: {value} %')
    total_acc += value
print(f'Average: {total_acc / len(results)} %')

