
ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 8])

How to solve "ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 8])"

I am trying to normalize my targets (the landmarks). Each image has 4 landmarks, and each landmark (keypoint) has an x and a y value. The batch size here is 8.
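As a quick sanity check on the shapes involved, here is a minimal NumPy sketch (dummy zero data, using the 4-landmark layout described above):

```python
import numpy as np

batch_size, n_landmarks = 8, 4
# each landmark is an (x, y) pair, so each image contributes 8 values
landmarks = np.zeros((batch_size, n_landmarks, 2))

# flattening per image -- the equivalent of landmarks.view(landmarks.size(0), -1)
flat = landmarks.reshape(batch_size, -1)
print(flat.shape)  # (8, 8)
```

So the landmarks batch is a 2-D (batch, features) array, not the 3-D (C, H, W) image tensor that torchvision's Normalize expects, which is exactly what the error below complains about.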

network = Network()
network.cuda()    

criterion = nn.MSELoss()
optimizer = optim.Adam(network.parameters(),lr=0.0001)

loss_min = np.inf
num_epochs = 1

start_time = time.time()
for epoch in range(1,num_epochs+1):
    
    loss_train = 0
    loss_test = 0
    running_loss = 0
    
    
    network.train()
    print('size of train loader is: ',len(train_loader))

    # iterate over the loader directly: next(iter(train_loader)) would
    # recreate the iterator every step and always return the first batch
    for step, batch in enumerate(train_loader, 1):

        images,landmarks = batch['image'],batch['landmarks']
        #RuntimeError: Given groups=1,weight of size [64,3,7,7],expected input[64,600,800,3] to have 3 channels,but got 600 channels instead
        #using permute below (NHWC -> NCHW) to fix the above error
        images = images.permute(0,3,1,2)
        
        images = images.cuda()
    
        landmarks = landmarks.view(landmarks.size(0),-1).cuda() 
    
        norm_image = transforms.Normalize([0.3809,0.3810,0.3810],[0.1127,0.1129,0.1130])
        # to_tensor is unnecessary here -- the batch is already a torch.Tensor --
        # and reassigning the loop variable would not modify `images`,
        # so stack the normalized images back into a batch tensor
        images = torch.stack([norm_image(image.float()) for image in images])
        
        
        norm_landmarks = transforms.Normalize(0.4949,0.2165)
        landmarks = norm_landmarks(landmarks)
        ##landmarks = torchvision.transforms.normalize(landmarks) #Do I need to normalize the target?
        
        predictions = network(images)
        
        # clear all the gradients before calculating them
        optimizer.zero_grad()
        
        print('predictions are: ',predictions.float())
        print('landmarks are: ',landmarks.float())
        # find the loss for the current step
        loss_train_step = criterion(predictions.float(),landmarks.float())
        
        
        loss_train_step = loss_train_step.to(torch.float32)
        print("loss_train_step before backward: ",loss_train_step)
        
        # calculate the gradients
        loss_train_step.backward()
        
        # update the parameters
        optimizer.step()
        
        print("loss_train_step after backward: ",loss_train_step)

        
        loss_train += loss_train_step.item()
        
        print("loss_train: ",loss_train)
        running_loss = loss_train/step
        print('step: ',step)
        print('running loss: ',running_loss)
        
        print_overwrite(step,len(train_loader),running_loss,'train')
        
    network.eval() 
    with torch.no_grad():
        
        # iterate over the test loader directly (not next(iter(train_loader)),
        # which read from the wrong loader and always returned the first batch)
        for step, batch in enumerate(test_loader, 1):

            images, landmarks = batch['image'], batch['landmarks']
            images = images.permute(0,3,1,2)
            images = images.cuda()
            landmarks = landmarks.view(landmarks.size(0),-1).cuda()
        
            predictions = network(images)

            # find the loss for the current step
            loss_test_step = criterion(predictions,landmarks)

            loss_test += loss_test_step.item()
            running_loss = loss_test/step

            print_overwrite(step,len(test_loader),running_loss,'Validation')
    
    loss_train /= len(train_loader)
    loss_test /= len(test_loader)
    
    print('\n--------------------------------------------------')
    print('Epoch: {}  Train Loss: {:.4f} Valid Loss: {:.4f}'.format(epoch,loss_train,loss_test))
    print('--------------------------------------------------')
    
    if loss_test < loss_min:
        loss_min = loss_test
        torch.save(network.state_dict(),'../moth_landmarks.pth') 
        print("\nMinimum Valid Loss of {:.4f} at epoch {}/{}".format(loss_min,epoch,num_epochs))
        print('Model Saved\n')
     
print('Training Complete')
print("Total Elapsed Time : {} s".format(time.time()-start_time))

But I get this error:

size of train loader is:  90

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-13-3e4770ad7109> in <module>
     40 
     41         norm_landmarks = transforms.Normalize(0.4949,0.2165)
---> 42         landmarks = norm_landmarks(landmarks)
     43         ##landmarks = torchvision.transforms.normalize(landmarks) #Do I need to normalize the target?
     44 

~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self,tensor)
    210             Tensor: normalized Tensor image.
    211         """
--> 212         return F.normalize(tensor,self.mean,self.std,self.inplace)
    213 
    214     def __repr__(self):

~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py in normalize(tensor,mean,std,inplace)
    282     if tensor.ndimension() != 3:
    283         raise ValueError('Expected tensor to be a tensor image of size (C,H,W). Got tensor.size() = '
--> 284                          '{}.'.format(tensor.size()))
    285 
    286     if not inplace:

ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 8]).

How should I normalize my landmarks?

Solution

    norm_landmarks = transforms.Normalize(0.4949,0.2165)
    landmarks = landmarks.unsqueeze_(0)
    landmarks = norm_landmarks(landmarks)

Adding

landmarks = landmarks.unsqueeze_(0)

solved the problem: Normalize expects a 3-D (C, H, W) tensor, and unsqueeze_(0) turns the 2-D [8, 8] landmarks batch into a 3-D [1, 8, 8] tensor that passes the check.
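Why this works can be sketched with NumPy (an illustrative stand-in: expand_dims plays the role of unsqueeze_(0), the arithmetic is the per-element math Normalize(0.4949, 0.2165) performs, and the landmark values here are dummy data):

```python
import numpy as np

mean, std = 0.4949, 0.2165
landmarks = np.full((8, 8), 0.5)   # 2-D (batch, features): fails Normalize's 3-D check

x = np.expand_dims(landmarks, 0)   # like unsqueeze_(0): shape becomes (1, 8, 8)
assert x.ndim == 3                 # now passes the (C, H, W) dimensionality check

normalized = (x - mean) / std      # the element-wise math Normalize computes
print(normalized.shape)  # (1, 8, 8)
```

Since the landmarks are not images, an alternative is to skip transforms.Normalize for them entirely and write `landmarks = (landmarks - 0.4949) / 0.2165` directly, which avoids the shape requirement altogether.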
