How to fix "RuntimeError: expected scalar type Double but found Float" when training a PyTorch CNN
I just started learning PyTorch and created my first CNN. The dataset contains 3360 RGB images, which I converted into a [3360, 3, 224, 224] tensor. The data and labels are stored in a dataset (torch.utils.data.TensorDataset). Below is the training code.
def train_net():
    dataset = ld.load()
    data_iter = Data.DataLoader(dataset, batch_size=168, shuffle=True)
    net = model.VGG_19()
    summary(net, (3, 224), device="cpu")
    loss_func = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9, dampening=0.1)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
    for epoch in range(5):
        print("epoch:", epoch + 1)
        train_loss = 0
        for i, data in enumerate(data_iter, 0):
            x, y = data
            print(x.dtype)
            optimizer.zero_grad()
            out = net(x)
            loss = loss_func(out, y)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
            if i % 100 == 99:
                print("loss:", train_loss / 100)
                train_loss = 0.0
    print("finish train")
Traceback (most recent call last):
  File "D:/python/DeepLearning/VGG/train.py", line 52, in <module>
    train_net()
  File "D:/python/DeepLearning/VGG/train.py", line 29, in train_net
    out = net(x)
  File "D:\python\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\python\DeepLearning\VGG\model.py", line 37, in forward
    out = self.conv3_64(x)
  File "D:\python\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\python\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
    input = module(input)
  File "D:\python\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\python\lib\site-packages\torch\nn\modules\conv.py", line 423, in forward
    return self._conv_forward(input, self.weight)
  File "D:\python\lib\site-packages\torch\nn\modules\conv.py", line 419, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
                    self.padding, self.dilation, self.groups)
RuntimeError: expected scalar type Double but found Float
I suspect something is wrong with x, so I printed its type with print(x.dtype):

torch.float64

That is double precision, not float. Do you know what is going on? Thanks for your help!
Solution
The error actually refers to the weights of the conv layers, which are float32 by default, at the point where the convolution (a matrix multiplication under the hood) is invoked. Since your input is double (float64 in PyTorch) while the conv weights are float (float32), the two dtypes do not match. The fix in your case is to cast the input to float:
def train_net():
    dataset = ld.load()
    data_iter = Data.DataLoader(dataset, batch_size=168, shuffle=True)
    net = model.VGG_19()
    summary(net, (3, 224, 224), device="cpu")
    loss_func = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9, dampening=0.1)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
    for epoch in range(5):
        print("epoch:", epoch + 1)
        train_loss = 0
        for i, data in enumerate(data_iter, 0):
            x, y = data
            x = x.float()  # HERE IS THE CHANGE
            print(x.dtype)
            optimizer.zero_grad()
            out = net(x)
            loss = loss_func(out, y)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
            if i % 100 == 99:
                print("loss:", train_loss / 100)
                train_loss = 0.0
    print("finish train")
This will work.
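For reference, here is a minimal, self-contained sketch that reproduces the mismatch and shows the two possible casts (the layer and input shapes are arbitrary, chosen only for illustration). Note that float64 inputs usually come from NumPy, whose default floating dtype is float64, so an even cleaner fix is to cast the tensors once when building the TensorDataset instead of casting every batch:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=3)  # conv weights are float32 by default
x64 = torch.randn(1, 3, 8, 8, dtype=torch.float64)  # double input, as in the question

# Passing a float64 input through a float32 layer raises the dtype-mismatch error
try:
    conv(x64)
except RuntimeError as e:
    print("RuntimeError:", e)

# Fix 1: cast the input down to float32 (the usual choice)
out = conv(x64.float())
print(out.dtype)  # torch.float32

# Fix 2: cast the whole model up to float64 instead (slower, rarely needed)
out64 = conv.double()(x64)
print(out64.dtype)  # torch.float64
```

Either way, the input dtype and the parameter dtype must agree; casting the input to float32 is the standard approach since it halves memory use and is faster on most hardware.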