How to make sense of torchmeta's BatchMetaDataLoader labels
from torchvision.transforms import Compose, ToTensor
from torchmeta import datasets as tm_datasets
from torchmeta.transforms import ClassSplitter
from torchmeta.utils.data import BatchMetaDataLoader

# Build the FC100 meta-training dataset (CIFAR-100 based, 5-way tasks)
dataset = tm_datasets.FC100(root='./data', num_classes_per_task=5,
                            transform=Compose([ToTensor()]),
                            meta_train=True, download=True)
# Split each task into a 2-shot train set and a 19-sample test set
dataset = ClassSplitter(dataset, shuffle=True,
                        num_train_per_class=2, num_test_per_class=19)
dataloader = BatchMetaDataLoader(dataset, batch_size=2, drop_last=True)
for batch in dataloader:
    samples, sample_labels = batch["train"]
The shape of samples is torch.Size([2, 10, 3, 32, 32]).
Why is sample_labels a list of tuples (of class-name strings) rather than a tensor?
The values of sample_labels are as follows:
sample_labels[0] = [
    [
        ('large_man-made_outdoor_things', 'trees'),
        ('fruit_and_vegetables', 'household_electrical_devices'),
        ('fish', 'trees'),
        ('vehicles_2', 'fish'),
        ('vehicles_1', 'household_furniture')
    ], [
        ('road', 'oak_tree'),
        ('apple', 'television'),
        ('shark', 'pine_tree'),
        ('lawn_mower', 'trout'),
        ('bus', 'bed')
    ]
]
sample_labels[1] = tensor([[0, 0],
                           [0, 0]])
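The nesting above is consistent with how PyTorch's default_collate (which torchmeta's batch loader builds on) treats string labels: lists of strings are zipped across the batch dimension into tuples instead of being stacked into a tensor. A minimal self-contained sketch of that behavior, using default_collate directly with made-up label lists rather than torchmeta itself:

```python
from torch.utils.data.dataloader import default_collate

# Two "tasks" in a batch; each task carries a list of per-sample
# class-name strings (what FC100 yields when no target_transform
# maps the names to integers).
task_a_labels = ["road", "apple", "shark"]
task_b_labels = ["oak_tree", "television", "pine_tree"]

collated = default_collate([task_a_labels, task_b_labels])
print(collated)
# [('road', 'oak_tree'), ('apple', 'television'), ('shark', 'pine_tree')]
# -> a list of tuples, one tuple per sample position, zipped across tasks
```

If integer labels are wanted instead, torchmeta's Categorical target transform (torchmeta.transforms.Categorical) maps class names to task-local integer indices, after which collation should produce ordinary label tensors.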