How to resolve confusion about declaring variables when new to OOP
I am building a neural network in PyTorch and trying to make the fully connected layer accept the dimension produced by a flatten layer I wrote.
So far I am doing the following:
from torch import nn
from torch.nn import functional as F

class my_model(nn.Module):
    def __init__(self, channel=16, out_class=1, flat_dim=24):
        super(my_model, self).__init__()
        self.channel = channel
        n_class = out_class
        self.conv1 = nn.Conv3d(1, channel // 8, kernel_size=5, stride=2, padding=1)
        self.pool1 = nn.AvgPool3d(2)
        self.bn1 = nn.BatchNorm3d(channel // 8)
        self.conv2 = nn.Conv3d(channel // 8, channel // 4, kernel_size=4, padding=1)
        self.bn2 = nn.BatchNorm3d(channel // 4)
        self.pool2 = nn.AvgPool3d(2)
        self.fc1 = nn.Linear(flat_dim, 500)
        self.fc2 = nn.Linear(500, 100)

    def forward(self, x):
        batch_size, D, W, H, T = x.shape
        print('input to model', x.shape)
        x = x.permute(0, 4, 1, 2, 3)
        # fold the T axis into the batch and add a channel axis for Conv3d
        x = x.reshape(batch_size * T, 1, D, W, H)
        print('after reshape', x.shape)
        h1 = F.relu(self.bn1(self.pool1(self.conv1(x))))
        print('after first layer sequence', h1.shape)
        h2 = F.relu(self.bn2(self.pool2(self.conv2(h1))))
        print('output to model', h2.shape)
        h2, flat_dim = self.flatten_cube(h2)
        h3 = self.fc1(h2)
        h4 = self.fc2(h3)
        print(h4.shape)
        return h4

    def flatten_cube(self, x):
        super_batch, C, D, W, H = x.shape
        flat_dim = C * D * W * H
        y = x.view(super_batch, flat_dim)
        return y, flat_dim
Is this the correct way to do it?
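For reference, one way to sidestep computing the flattened size by hand: PyTorch's built-in nn.Flatten collapses the trailing dimensions, and nn.LazyLinear (available since PyTorch 1.8) infers in_features on the first forward pass. The sketch below is a minimal illustration of that pattern, not the model above; the layer sizes and input shape here are made up for the example.

```python
import torch
from torch import nn

# Minimal sketch: nn.LazyLinear infers in_features at the first call,
# so the size produced by nn.Flatten never needs to be hard-coded.
model = nn.Sequential(
    nn.Conv3d(1, 2, kernel_size=5, stride=2, padding=1),
    nn.AvgPool3d(2),
    nn.Flatten(),          # collapses (C, D, H, W) into one dimension
    nn.LazyLinear(500),    # in_features inferred on first forward pass
    nn.ReLU(),
    nn.Linear(500, 100),
)

x = torch.randn(4, 1, 32, 32, 32)  # (batch, channels, D, H, W)
out = model(x)
print(out.shape)  # torch.Size([4, 100])
```

Before the first forward pass the lazy layer's weights are uninitialized, so the model must see one batch (or a dummy tensor of the right shape) before optimizers or state_dict copies that need concrete parameter shapes.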