
PyTorch: overfitting on a small subset: debugging

How can I overfit a small subset of the data in PyTorch to debug my model?

I am building a multi-class image classifier.
A well-known debugging trick is to overfit a single batch, to check whether the program has any deeper bugs.
How can I design the code so that this check is portable?
One tedious and inelegant approach would be to build a held-out train/test folder for a small batch, where the test set consists of two distributions, seen and unseen data; if the model performs well on the seen data but poorly on the unseen data, we could conclude that the network has no deeper structural bug.
However, this does not seem like a smart or portable approach, and it would have to be repeated for every problem.
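For concreteness, here is a minimal sketch of the single-batch check I have in mind; the model, loader and hyperparameters below are placeholders rather than part of my actual code. The idea is just to take one fixed batch from the DataLoader and keep training on it until the loss goes to (near) zero.

import torch

def overfit_single_batch(model, loader, num_steps=200, lr=1e-3, device='cpu'):
    # A healthy model/loss/optimizer setup should drive the loss on one
    # fixed batch close to zero within a few hundred steps.
    model.to(device).train()
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    images, targets = next(iter(loader))  # one fixed batch, reused every step
    images, targets = images.to(device), targets.to(device)

    for step in range(num_steps):
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        if step % 50 == 0:
            print(f'step {step}: loss {loss.item():.4f}')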

Currently, I have a Dataset class, and I partition the data into train / dev / test as follows -

import pandas as pd
from sklearn.model_selection import train_test_split


def split_equal_into_val_test(csv_file=None, stratify_colname='y',
                              frac_train=0.6, frac_val=0.15, frac_test=0.25,
                              random_state=None):
    """
    Split a Pandas dataframe into three subsets (train, val, and test).

    The split follows the fractional ratios provided by the user, where the val
    and test sets contain the same number of samples of each class, while the
    train set keeps the remaining samples.

    Parameters
    ----------
    csv_file : Input data csv file to be passed
    stratify_colname : str
        The name of the column that will be used for stratification. Usually
        this column would be for the label.
    frac_train : float
    frac_val   : float
    frac_test  : float
        The ratios with which the dataframe will be split into train, val, and
        test data. The values should be expressed as float fractions and should
        sum to 1.0.
    random_state : int,None,or RandomStateInstance
        Value to be passed to train_test_split().

    Returns
    -------
    df_train,df_val,df_test :
        Dataframes containing the three splits.

    """
    df = pd.read_csv(csv_file).iloc[:, 1:]

    if frac_train + frac_val + frac_test != 1.0:
        raise ValueError('fractions %f, %f, %f do not add up to 1.0' %
                         (frac_train, frac_val, frac_test))

    if stratify_colname not in df.columns:
        raise ValueError('%s is not a column in the dataframe' %
                         (stratify_colname))

    df_input = df

    no_of_classes = 4
    # Reserve ~10% of the data for the val/test pool, split equally
    # across the four classes.
    sfact = int((0.1 * len(df)) / no_of_classes)

    # Shuffling the data frame
    df_input = df_input.sample(frac=1)


    df_temp_1 = df_input[df_input['labels'] == 1][:sfact]
    df_temp_2 = df_input[df_input['labels'] == 2][:sfact]
    df_temp_3 = df_input[df_input['labels'] == 3][:sfact]
    df_temp_4 = df_input[df_input['labels'] == 4][:sfact]

    dev_test_df = pd.concat([df_temp_1, df_temp_2, df_temp_3, df_temp_4])
    dev_test_y = dev_test_df['labels']
    # Split the temp dataframe into val and test dataframes.
    df_val, df_test, dev_Y, test_Y = train_test_split(
        dev_test_df, dev_test_y, stratify=dev_test_y, test_size=0.5,
        random_state=random_state)


    df_train = df[~df['img'].isin(dev_test_df['img'])]

    assert len(df_input) == len(df_train) + len(df_val) + len(df_test)

    return df_train, df_val, df_test

def train_val_to_ids(train, val, test, stratify_columns='labels'):  # noqa
    """
    Convert the stratified dataframes into dictionaries: partition['train_set'] / partition['val_set'] and labels.

    Follows the data-loading pattern described at https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel
    Parameters
    -----------
    train, val, test : The stratified dataframes returned by split_equal_into_val_test
    stratify_columns : The label column

    Returns
    -----------
    partition,labels:
        partition dictionary containing train and validation ids and label dictionary containing ids and their labels # noqa

    """
    train_list, val_list, test_list = train['img'].to_list(), val['img'].to_list(), test['img'].to_list()  # noqa
    partition = {"train_set": train_list, "val_set": val_list}
    labels = dict(zip(train.img, train.labels))
    labels.update(dict(zip(val.img, val.labels)))
    return partition, labels
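For completeness, this is roughly how the partition / labels dictionaries feed a Dataset, following the Stanford post linked above; the ImageDataset class, img_dir and the transform here are simplified placeholders for my actual loading code.

import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class ImageDataset(Dataset):
    # Wraps a list of image ids and an {id: label} dict, as produced by
    # train_val_to_ids. img_dir and the loading logic are placeholders.
    def __init__(self, list_ids, labels, img_dir, transform=None):
        self.list_ids = list_ids
        self.labels = labels
        self.img_dir = img_dir
        self.transform = transform

    def __len__(self):
        return len(self.list_ids)

    def __getitem__(self, index):
        img_id = self.list_ids[index]
        image = Image.open(os.path.join(self.img_dir, img_id)).convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        return image, self.labels[img_id]

# df_train, df_val, df_test = split_equal_into_val_test('data.csv', stratify_colname='labels')
# partition, labels = train_val_to_ids(df_train, df_val, df_test)
# train_loader = DataLoader(ImageDataset(partition['train_set'], labels, 'images'),
#                           batch_size=32, shuffle=True)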

P.S. - I am aware of PyTorch Lightning and know that it has an overfit option that is easy to use, but I do not want to switch to PyTorch Lightning.

Solution

I don't know how portable it is, but one trick I use is to modify the __len__ method of the Dataset.

If I change it from

def __len__(self):
    return len(self.data_list)

to

def __len__(self):
    return 20

it will only ever use the first 20 elements in the dataset (regardless of shuffling). You only need to change one line of code and the rest keeps working, so I think it is quite neat.
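If you would rather not touch the Dataset class at all, a similar effect (an alternative, not the trick from this answer) can be had with torch.utils.data.Subset, which wraps an existing dataset and exposes only the chosen indices; full_dataset below is a placeholder for your own Dataset instance.

from torch.utils.data import DataLoader, Subset

# Restrict an existing dataset to its first 20 samples without editing its code.
tiny_dataset = Subset(full_dataset, list(range(20)))  # full_dataset: your Dataset instance
tiny_loader = DataLoader(tiny_dataset, batch_size=4, shuffle=True)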
