
How do I fix a runtime error when building a model with a CNN?

I want to build a model with a CNN, but it keeps reporting a tensor size error, and I can't find that size anywhere in my code! With two classes it was fine, but when I change it to 4 the error shows up. Here is my code:

from __future__ import print_function,division

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
from torchvision import datasets,models,transforms
import matplotlib.pyplot as plt
import numpy as np
import time
import os
import copy

import shutil
import re

two_train =  "/Data/train/2"
two_val =  "/Data/val/2"
nine_train =  "/Data/train/9"
nine_val = "/Data/val/9"
seven_train =  "/Data/train/7"
seven_val = "/Data/val/7"
eight_train =  "/Data/train/8"
eight_val = "/Data/val/8"

two_files = os.listdir(two_train)
nine_files = os.listdir(nine_train)
seven_files = os.listdir(seven_train)
eight_files = os.listdir(eight_train)

# Make transforms and use data loaders

# We'll use these a lot, so make them variables
mean_nums = [0.485, 0.456, 0.406]
std_nums = [0.229, 0.224, 0.225]

chosen_transforms = {'train': transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums)
]), 'val': transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums)
])}

# Set the directory for the data
data_dir = '/content/drive/MyDrive/Data/'

# Use the image folder function to create datasets
chosen_datasets = {x: datasets.ImageFolder(os.path.join(data_dir,x),chosen_transforms[x])
                  for x in ['train','val']}

# Make iterables with the DataLoaders
DataLoaders = {x: torch.utils.data.DataLoader(chosen_datasets[x],batch_size=4,shuffle=True,num_workers=8)
              for x in ['train','val']}

dataset_sizes = {x: len(chosen_datasets[x]) for x in ['train','val']}
class_names = chosen_datasets['train'].classes

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

dataset_sizes
# class_names
# device
mean_nums

def imshow(inp,title=None):
    inp = inp.numpy().transpose((1,2,0))
    mean = np.array([mean_nums])
    std = np.array([std_nums])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # Pause a bit so that plots are updated


# Grab some of the training data to visualize
inputs,classes = next(iter(DataLoaders['train']))

I get this error:

Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 83, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 65, 36] at entry 0 and [3, 61, 39] at entry 

The error is raised on the last line and printed to the console. I searched online but found nothing.

Solution

The error means that PyTorch is trying to stack two tensors of different shapes along the batch dimension. This happens because the images in your dataset have different sizes.
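To see why this fails, here is a minimal sketch (not from the original post) that reproduces the problem: default_collate builds a batch with torch.stack, and torch.stack refuses tensors of unequal size.

import torch

# Two "images" with the same channel count but different heights/widths,
# mimicking the [3, 65, 36] and [3, 61, 39] shapes from the traceback.
a = torch.randn(3, 65, 36)
b = torch.randn(3, 61, 39)

try:
    torch.stack([a, b])  # this is what default_collate does with a batch
except RuntimeError as e:
    print(e)  # "stack expects each tensor to be equal size, ..."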

One way to fix this is to add a Resize transform so that every image ends up with the same shape, for example:

chosen_transforms = {'train': transforms.Compose([
        transforms.Resize((65, 40)),
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums)
]), 'val': transforms.Compose([
        transforms.Resize((65, 40)),
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums)
])}

You may also need to force the images to be loaded with PIL:

# Use the image folder function to create datasets
chosen_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), chosen_transforms[x],
                                           loader=torchvision.datasets.folder.pil_loader)
                  for x in ['train', 'val']}
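With the Resize in place every image comes out as a [3, 65, 40] tensor, so default_collate can stack them. As a quick sanity check (a sketch, assuming the DataLoaders are rebuilt from the datasets above with the new transforms):

# Rebuild the loaders with the resized datasets, then fetch one batch.
DataLoaders = {x: torch.utils.data.DataLoader(chosen_datasets[x], batch_size=4,
                                              shuffle=True, num_workers=8)
              for x in ['train', 'val']}

inputs, classes = next(iter(DataLoaders['train']))
print(inputs.shape)   # expected: torch.Size([4, 3, 65, 40]) with batch_size=4
print(classes.shape)  # expected: torch.Size([4])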
