Making a "torch.nn.ModuleList()" of lists to design CSPDarkNet53 in the YOLOv4 backbone

I am referring to this code. It is used to build YOLOv4, and I have a question about how the backbone is constructed (= CSPDarkNet53 = CSPNet + DarkNet53).

The code is in yolov4 > models.py:

class Backbone(nn.Module):
    def __init__(self, yolov4conv137weight=None, inference=False):
        super().__init__()
        self.down1 = DownSample1()
        self.down2 = DownSample2()
        self.down3 = DownSample3()
        self.down4 = DownSample4()
        self.down5 = DownSample5()
        self.end = TransferClassify()

        # neck
        self.neek = Neck(inference)
        # yolov4conv137
        if yolov4conv137weight:
            _model = nn.Sequential(self.down1, self.down2, self.down3, self.down4, self.down5, self.neek)
            pretrained_dict = torch.load(yolov4conv137weight)

            model_dict = _model.state_dict()
            # 1. filter out unnecessary keys
            pretrained_dict = {k1: v for (k, v), k1 in zip(pretrained_dict.items(), model_dict)}
            # 2. overwrite entries in the existing state dict
            model_dict.update(pretrained_dict)
            _model.load_state_dict(model_dict)

        self._model = nn.Sequential(self.down1, self.end)

    def forward(self, x):
        return self._model(x)
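
One detail worth noting in the code above: the yolov4conv137weight loading does not match keys by name. zip(pretrained_dict.items(), model_dict) walks both state dicts in registration order and re-keys each pretrained tensor under the corresponding key of the fresh model, so the match is purely positional. A small self-contained illustration of that trick (the toy models below are my own, not from the repo):

import torch
import torch.nn as nn

# Pretrained model: layers are named '0' and '1' by nn.Sequential.
pretrained = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))

# New model: same layer shapes in the same order, but different attribute names.
class Target(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc_a = nn.Linear(4, 8)
        self.fc_b = nn.Linear(8, 2)

target = Target()
pretrained_dict = pretrained.state_dict()   # keys: 0.weight, 0.bias, 1.weight, 1.bias
model_dict = target.state_dict()            # keys: fc_a.weight, fc_a.bias, fc_b.weight, fc_b.bias

# Re-key each pretrained tensor under the n-th key of the target model.
remapped = {k_new: v for (k_old, v), k_new in zip(pretrained_dict.items(), model_dict)}
model_dict.update(remapped)
target.load_state_dict(model_dict)

print(torch.equal(target.fc_a.weight, pretrained[0].weight))   # True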

My question is about DownSample2():

class ResBlock(nn.Module):
    """
    Sequential residual blocks, each of which consists of
    two convolution layers.
    Args:
        ch (int): number of input and output channels.
        nblocks (int): number of residual blocks.
        shortcut (bool): if True, residual tensor addition is enabled.
    """

    def __init__(self, ch, nblocks=1, shortcut=True):
        super().__init__()
        self.shortcut = shortcut
        self.module_list = nn.ModuleList()
        for i in range(nblocks):
            resblock_one = nn.ModuleList()
            resblock_one.append(Conv_Bn_Activation(ch, ch, 1, 1, 'mish'))
            resblock_one.append(Conv_Bn_Activation(ch, ch, 3, 1, 'mish'))
            self.module_list.append(resblock_one)

    def forward(self, x):
        for module in self.module_list:
            h = x
            for res in module:
                h = res(h)
            x = x + h if self.shortcut else h
        return x

class DownSample2(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv_Bn_Activation(64, 128, 3, 2, 'mish')
        self.conv2 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')
        # r -2
        self.conv3 = Conv_Bn_Activation(128, 64, 1, 1, 'mish')

        self.resblock = ResBlock(ch=64, nblocks=2)

        # s -3
        self.conv4 = Conv_Bn_Activation(64, 64, 1, 1, 'mish')
        # r -1 -10
        self.conv5 = Conv_Bn_Activation(128, 128, 1, 1, 'mish')

    def forward(self, input):
        x1 = self.conv1(input)
        x2 = self.conv2(x1)
        x3 = self.conv3(x1)

        r = self.resblock(x3)
        x4 = self.conv4(r)

        x4 = torch.cat([x4, x2], dim=1)
        x5 = self.conv5(x4)
        return x5
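
To see how the CSP-style split in DownSample2 fits together, here is a quick shape trace. The ConvBnMish stub (conv + batch norm + Mish, argument order in_ch, out_ch, kernel, stride) and the 152x152 toy input are my own; the channel, kernel, and stride values follow the code above, and nn.Mish needs a reasonably recent PyTorch:

import torch
import torch.nn as nn

class ConvBnMish(nn.Sequential):
    # Minimal stand-in for Conv_Bn_Activation(in_ch, out_ch, kernel, stride, 'mish')
    def __init__(self, in_ch, out_ch, kernel, stride):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel, stride, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.Mish(),
        )

x  = torch.randn(1, 64, 152, 152)              # toy input with the 64 channels DownSample2 expects
x1 = ConvBnMish(64, 128, 3, 2)(x)              # conv1: stride 2 halves the spatial size -> (1, 128, 76, 76)
x2 = ConvBnMish(128, 64, 1, 1)(x1)             # conv2: the branch that is kept aside
x3 = ConvBnMish(128, 64, 1, 1)(x1)             # conv3: the branch fed into the residual blocks
r  = x3                                        # ResBlock(ch=64) preserves the shape (1, 64, 76, 76)
x4 = ConvBnMish(64, 64, 1, 1)(r)               # conv4
x5 = ConvBnMish(128, 128, 1, 1)(torch.cat([x4, x2], dim=1))   # concat both branches back to 128 channels
print(x5.shape)                                # torch.Size([1, 128, 76, 76])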

They wrote a ResBlock() class and use it to build the DownSample2() layer.

Inside ResBlock() there is a for loop like this:

    self.module_list = nn.ModuleList()
    for i in range(nblocks):
        resblock_one = nn.ModuleList()
        resblock_one.append(Conv_Bn_Activation(ch, ch, 1, 1, 'mish'))
        resblock_one.append(Conv_Bn_Activation(ch, ch, 3, 1, 'mish'))
        self.module_list.append(resblock_one)

So nblocks times, they repeat a pair of Conv_Bn_Activation layers with kernel sizes 1 and 3 and append them to an nn.ModuleList().

It looks like they first append to the inner ModuleList() resblock_one, and then append that to the outer ModuleList() self.module_list, right? So why not just append the layers directly to self.module_list?

Does that produce a different result, or is my approach wrong? What I mean is:

    self.module_list = nn.ModuleList()
    for i in range(nblocks):
        self.module_list.append(Conv_Bn_Act_layer(ch, ch, 1, 1, 'mish'))
        self.module_list.append(Conv_Bn_Act_layer(ch, ch, 3, 1, 'mish'))

Can this code be used instead of the one above?
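
To check my understanding, here is a self-contained toy comparison of the two layouts. I use plain Conv2d layers as stand-ins for Conv_Bn_Activation and the class names are my own, so this is only a sketch, not the repo code. Both variants register exactly the same parameters; the difference is that the flat version's forward() has to recover the pairing by index before it can do the residual addition:

import torch
import torch.nn as nn

class NestedResBlock(nn.Module):
    """Nested ModuleList, as in the repo: one inner list per residual block."""
    def __init__(self, ch, nblocks=2):
        super().__init__()
        self.module_list = nn.ModuleList()
        for _ in range(nblocks):
            block = nn.ModuleList()
            block.append(nn.Conv2d(ch, ch, 1, 1, 0))   # stand-in for the 1x1 conv
            block.append(nn.Conv2d(ch, ch, 3, 1, 1))   # stand-in for the 3x3 conv
            self.module_list.append(block)

    def forward(self, x):
        for block in self.module_list:
            h = x
            for layer in block:
                h = layer(h)
            x = x + h                                   # one residual add per inner list
        return x

class FlatResBlock(nn.Module):
    """Flat ModuleList: the same layers, but forward() must pair them by index."""
    def __init__(self, ch, nblocks=2):
        super().__init__()
        self.module_list = nn.ModuleList()
        for _ in range(nblocks):
            self.module_list.append(nn.Conv2d(ch, ch, 1, 1, 0))
            self.module_list.append(nn.Conv2d(ch, ch, 3, 1, 1))

    def forward(self, x):
        for i in range(0, len(self.module_list), 2):    # step two at a time to rebuild the pairs
            h = self.module_list[i + 1](self.module_list[i](x))
            x = x + h
        return x

torch.manual_seed(0)
nested, flat = NestedResBlock(64), FlatResBlock(64)

# Copy the nested weights into the flat model positionally (the same zip trick
# that Backbone uses above), then check that both variants agree.
flat_dict = flat.state_dict()
flat_dict.update({k_flat: v for (_, v), k_flat in zip(nested.state_dict().items(), flat_dict)})
flat.load_state_dict(flat_dict)

x = torch.randn(1, 64, 16, 16)
print(torch.allclose(nested(x), flat(x)))   # True: same computation, different bookkeeping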

Now I think I understand... in forward() they consume the outer list group by group, so each inner pair of convs gets its own residual addition?
