How to fix "Dimension out of range (expected to be in range of [-4, 3], but got 64)"
I am new to PyTorch and have been working on training an MLP model on the MNIST dataset. Basically, I feed the images and labels to the model and train on the dataset. I use CrossEntropyLoss() as the loss function, but every time I run the model I get a dimension error:
IndexError                                Traceback (most recent call last)
<ipython-input-37-04f8cfc1d3b6> in <module>()
     47
     48         # Forward
---> 49         outputs = model(images)
     50
5 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/flatten.py in forward(self, input)
     38
     39     def forward(self, input: Tensor) -> Tensor:
---> 40         return input.flatten(self.start_dim, self.end_dim)
     41
     42     def extra_repr(self) -> str:
IndexError: Dimension out of range (expected to be in range of [-4, 3], but got 64)
Here is the MLP class I created:
class MLP(nn.Module):
    def __init__(self, device, input_size=1*28*28, output_size=10):
        super().__init__()
        self.seq = nn.Sequential(nn.Flatten(BATCH=64, input_size), nn.Linear(input_size, 32), nn.ReLU(), nn.Linear(32, output_size))
        self.to(device)

    def forward(self, x):
        return self.seq(x)
and the rest of the training code is:
from tqdm.notebook import tqdm
from datetime import datetime
from torch.utils.tensorboard import SummaryWriter
import torch.optim as optim

exp_name = "MLP version 1"
# log_name = "logs/" + exp_name + f" {datetime.now()}"
# print("Tensorboard logs will be written to:", log_name)
# writer = SummaryWriter(log_name)

criterion = nn.CrossEntropyLoss()
model = MLP(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
num_epochs = 10

for epoch in tqdm(range(num_epochs)):
    epoch_train_loss = 0.0
    epoch_accuracy = 0.0
    for data in train_loader:
        images, labels = data
        images, labels = images.to(device), labels.to(device)
        images = images.permute(0, 3, 1, 2)
        optimizer.zero_grad()
        print("hello")

        # Forward
        outputs = model(images)
        loss = criterion(outputs, labels)
        epoch_train_loss += loss.item()
        loss.backward()
        optimizer.step()
        accuracy = compute_accuracy(outputs, labels)
        epoch_accuracy += accuracy

    writer.add_scalar("Loss/training", epoch_train_loss, epoch)
    writer.add_scalar("Accuracy/training", epoch_accuracy / len(train_loader), epoch)
    print('epoch: %d loss: %.3f' % (epoch + 1, epoch_train_loss / len(train_loader)))
    print('epoch: %d accuracy: %.3f' % (epoch + 1, epoch_accuracy / len(train_loader)))
    epoch_accuracy = 0.0

    # The code below computes the validation results
    for data in val_loader:
        images, labels = data
        images, labels = images.to(device), labels.to(device)
        images = images.permute(0, 3, 1, 2)
        model.eval()
        with torch.no_grad():
            outputs = model(images)
        accuracy = compute_accuracy(outputs, labels)
        epoch_accuracy += accuracy
    writer.add_scalar("Accuracy/validation", epoch_accuracy / len(val_loader), epoch)

print("finished training")
Any help would be appreciated. Thanks.
Solution
Use nn.Flatten() instead of nn.Flatten(BATCH=64, input_size). nn.Flatten does not take a batch size; its arguments are start_dim and end_dim, the range of dimensions to collapse. Passing 64 asks it to flatten starting at dimension 64 of a 4-D tensor, which is exactly the IndexError in the traceback. The default nn.Flatten() uses start_dim=1, so it keeps the batch dimension and flattens everything else.
https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html
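With that one-line change the forward pass works. A minimal sketch of the corrected class (input shape 1×28×28 and batch size 64 are taken from the question; the device argument here is just CPU for illustration):

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, device, input_size=1*28*28, output_size=10):
        super().__init__()
        # nn.Flatten() defaults to start_dim=1, so the batch dimension
        # is preserved: (N, 1, 28, 28) -> (N, 784)
        self.seq = nn.Sequential(
            nn.Flatten(),
            nn.Linear(input_size, 32),
            nn.ReLU(),
            nn.Linear(32, output_size),
        )
        self.to(device)

    def forward(self, x):
        return self.seq(x)

model = MLP(device=torch.device("cpu"))
out = model(torch.randn(64, 1, 28, 28))  # a dummy batch of 64 images
print(out.shape)  # torch.Size([64, 10])
```

The output has one row per image and one column per digit class, which is the shape nn.CrossEntropyLoss expects alongside a (64,)-shaped label tensor.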