
Hyperparameter optimization in PyTorch (currently using sklearn GridSearchCV)

I am following this PyTorch tutorial (link) and want to add grid search with sklearn.model_selection.GridSearchCV (link) to optimize the hyperparameters. I am having trouble understanding what X and y in gs.fit(X, y) should be; according to the documentation (link), X and y should have the structure below, but I can't figure out how to derive them from the code. The PennFudanDataset class returns img and target in a form that does not line up with the X, y I need. Where would n_samples and n_features come from, in the code block below or in the tutorial's model-related blocks?

fit(X, y=None, *, groups=None, **fit_params)

Run fit with all sets of parameters.

Parameters

X : array-like of shape (n_samples, n_features)
    Training vector, where n_samples is the number of samples and n_features is the number of features.

y : array-like of shape (n_samples, n_output) or (n_samples,), default=None
    Target relative to X for classification or regression; None for unsupervised learning.
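For reference, a minimal sketch of the kind of input GridSearchCV normally expects: a scikit-learn estimator plus flat arrays (this toy example uses SVC on random data purely to illustrate the shapes; it is not related to the tutorial's model):

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# X: (n_samples, n_features) -- one flat feature vector per sample
X = np.random.rand(100, 4)
# y: (n_samples,) -- one label per sample
y = np.random.randint(0, 2, size=100)

gs = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)
gs.fit(X, y)
print(gs.best_params_)

The detection dataset below returns a PIL image and a dict of tensors per sample, which is exactly the mismatch the question is about.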

Is there anything else that would be easier to implement for this particular tutorial? I have read about Ray Tune (link), Optuna (link), etc., but they look more involved than this. I am also currently looking at scipy.optimize.brute (link), which seems simpler.
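For orientation, here is a minimal sketch of how scipy.optimize.brute could drive such a search. train_and_evaluate is a hypothetical helper, not part of the tutorial; in a real run it would train the model briefly and return a validation loss:

from scipy import optimize

def train_and_evaluate(lr, momentum):
    # hypothetical helper: in a real run this would train the tutorial's
    # Mask R-CNN for a few epochs and return a validation loss; here it
    # is only a placeholder so the sketch is self-contained
    return (lr - 0.005) ** 2 + (momentum - 0.9) ** 2

def objective(params):
    log_lr, momentum = params
    return train_and_evaluate(lr=10 ** log_lr, momentum=momentum)

# exhaustively evaluate a grid: log10(lr) in [-4, -2), momentum in [0.8, 0.99)
result = optimize.brute(
    objective,
    ranges=(slice(-4, -2, 0.5), slice(0.8, 0.99, 0.05)),
    finish=None,  # skip the default fmin polishing step
)
print(result)  # grid point with the lowest objective value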

The PennFudanDataset class:

import os
import numpy as np
import torch
from PIL import Image


class PennFudanDataset(object):
    def __init__(self, root, transforms):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to
        # ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = Image.open(img_path).convert("RGB")
        # note that we haven't converted the mask to RGB,
        # because each color corresponds to a different instance
        # with 0 being background
        mask = Image.open(mask_path)
        # convert the PIL Image into a numpy array
        mask = np.array(mask)
        # instances are encoded as different colors
        obj_ids = np.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]

        # split the color-encoded mask into a set
        # of binary masks
        masks = mask == obj_ids[:, None, None]

        # get bounding box coordinates for each mask
        num_objs = len(obj_ids)
        boxes = []
        for i in range(num_objs):
            pos = np.where(masks[i])
            xmin = np.min(pos[1])
            xmax = np.max(pos[1])
            ymin = np.min(pos[0])
            ymax = np.max(pos[0])
            boxes.append([xmin, ymin, xmax, ymax])

        # convert everything into a torch.Tensor
        boxes = torch.as_tensor(boxes, dtype=torch.float32)
        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)
        masks = torch.as_tensor(masks, dtype=torch.uint8)

        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["masks"] = masks
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target

    def __len__(self):
        return len(self.imgs)
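A quick sanity check of one sample makes the shape mismatch concrete; this assumes the PennFudanPed archive is unpacked next to the script (the path is an assumption, adjust as needed):

# inspect what one sample actually looks like
dataset = PennFudanDataset("PennFudanPed", transforms=None)
img, target = dataset[0]
print(type(img))     # PIL.Image.Image -- not a flat (n_samples, n_features) row
print(list(target))  # ['boxes', 'labels', 'masks', ...] -- a dict, not a 1-D y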
