
Custom ImageDataGenerator (Keras) for a large dataset with multiple inputs


I have been trying to implement a custom ImageDataGenerator that reads from an HDF5 file (a large dataset of 88k image pairs) for an image classification model with two inputs and one output. The network works as follows:

  1. Each image of a pair is fed into a VGG16 model for feature extraction (the two branches share parameters, and the layers are frozen during training).
  2. The outputs of the two VGG16 branches are concatenated and fed into three fully connected layers.
  3. The final output is the probability that the two images are compatible (top and bottom garment types, for outfit matching).

Here is the code of my custom generator, which reads an HDF5 file containing two datasets: the paired images, shaped (88000, 2, 224, 224, 3), and the labels, shaped (88000,), where 1 means match and 0 means no match.

import h5py
import numpy as np
from tensorflow.keras.utils import to_categorical


class HDF5DataGenerator:
    def __init__(self, dbPath, batchSize, preprocessors=None, aug=None,
                 binarize=True, classes=2):
        self.batchSize = batchSize
        self.preprocessors = preprocessors
        self.aug = aug
        self.binarize = binarize
        self.classes = classes

        self.db = h5py.File(dbPath, 'r')
        self.numImages = self.db['images'].shape[0]

    def generator(self, passes=np.inf):
        epochs = 0
        while epochs < passes:
            # Shuffle the index order once per epoch.
            idx = np.arange(0, self.numImages, dtype='int')
            np.random.shuffle(idx)
            for i in np.arange(0, self.numImages, self.batchSize):
                # h5py fancy indexing requires indices in increasing order.
                idxBatch = np.array(idx[i:i + self.batchSize])
                idxBatch.sort()

                imagesA = self.db['images'][idxBatch, 0]
                imagesB = self.db['images'][idxBatch, 1]
                labels = self.db['labels'][idxBatch]

                if self.binarize:
                    labels = to_categorical(labels, self.classes)

                if self.preprocessors is not None:
                    procImagesA = []
                    for image in imagesA:
                        for p in self.preprocessors:
                            image = p.preprocess(image)
                        procImagesA.append(image)
                    imagesA = np.array(procImagesA)

                    procImagesB = []
                    for image in imagesB:
                        for p in self.preprocessors:
                            image = p.preprocess(image)
                        procImagesB.append(image)
                    imagesB = np.array(procImagesB)

                if self.aug is not None:
                    # shuffle=False keeps the two image streams and the labels
                    # aligned; flow() called without labels yields only images,
                    # so the second call must not be unpacked into a tuple.
                    (imagesA, labels) = next(self.aug.flow(
                        imagesA, labels, batch_size=self.batchSize, shuffle=False))
                    imagesB = next(self.aug.flow(
                        imagesB, batch_size=self.batchSize, shuffle=False))

                yield [imagesA, imagesB], labels

            epochs += 1

    def close(self):
        self.db.close()
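As a side note on the indexing above (a minimal sketch with made-up shapes, not part of the original post): h5py fancy indexing requires the index array in increasing order, which is why the batch indices are sorted before reading. A tiny file with the same dataset layout illustrates the read pattern:

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny HDF5 file with the same layout as in the post:
# 'images' -> (N, 2, H, W, 3) image pairs, 'labels' -> (N,)
path = os.path.join(tempfile.mkdtemp(), 'toy.hdf5')
with h5py.File(path, 'w') as f:
    f.create_dataset('images', data=np.random.rand(10, 2, 8, 8, 3).astype('float32'))
    f.create_dataset('labels', data=np.random.randint(0, 2, size=10))

db = h5py.File(path, 'r')
idx = np.arange(10)
np.random.shuffle(idx)

batch = np.array(idx[:4])
batch.sort()                      # h5py rejects unsorted fancy indices

imagesA = db['images'][batch, 0]  # first image of each pair
imagesB = db['images'][batch, 1]  # second image of each pair
labels = db['labels'][batch]

print(imagesA.shape, imagesB.shape, labels.shape)
db.close()
```

Shapes here are scaled down for illustration; the real file would use 224x224 images.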

The generator is passed to the fit function as follows:

trainGen = HDF5DataGenerator('train.hdf5',BATCH_SIZE,preprocessors=[mp,iap],aug=aug,classes=2)

history = model.fit(trainGen.generator(),
                    steps_per_epoch=trainGen.numImages // BATCH_SIZE,
                    # validation_data=testGen.generator(),
                    # validation_steps=testGen.numImages // BATCH_SIZE,
                    epochs=EPOCHS,
                    max_queue_size=10)

I get the following error and, frankly, I don't understand it. Since the incompatibility error shows (None, 1), I have checked the dimensions of every image written to the file, which made me think something was wrong with the data, but that is not the problem.

ValueError: in user code:

    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
        return step_function(self,iterator)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:795 step_function  **
        outputs = model.distribute_strategy.run(run_step,args=(data,))
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn,args=args,kwargs=kwargs)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn,args,kwargs)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
        return fn(*args,**kwargs)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:788 run_step  **
        outputs = model.train_step(data)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:755 train_step
        loss = self.compiled_loss(
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:203 __call__
        loss_value = loss_obj(y_t,y_p,sample_weight=sw)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:152 __call__
        losses = call_fn(y_true,y_pred)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:256 call  **
        return ag_fn(y_true,y_pred,**self._fn_kwargs)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
        return target(*args,**kwargs)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:1537 categorical_crossentropy
        return K.categorical_crossentropy(y_true,from_logits=from_logits)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
        return target(*args,**kwargs)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/keras/backend.py:4833 categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)
    /Users/nicolas/.virtualenvs/cv/lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py:1134 assert_is_compatible_with
        raise ValueError("Shapes %s and %s are incompatible" % (self,other))

    ValueError: Shapes (None,None) and (None,1) are incompatible
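For context on what the two shapes in the message refer to (a sketch, not part of the original post): `categorical_crossentropy` compares the label batch produced by the generator against the model's output batch. With `to_categorical(labels, 2)` each target row is a 2-element one-hot vector, so a final layer that emits one value per sample, shape `(batch, 1)`, cannot be compatible with it:

```python
import numpy as np


def to_categorical_np(labels, num_classes):
    """Minimal stand-in for keras.utils.to_categorical."""
    out = np.zeros((len(labels), num_classes), dtype='float32')
    out[np.arange(len(labels)), labels] = 1.0
    return out


labels = np.array([1, 0, 1, 1])          # raw labels as stored in the HDF5 file
targets = to_categorical_np(labels, 2)   # shape (4, 2) -> (None, 2) per batch

# A Dense(1) head produces (batch, 1); a Dense(2, softmax) head produces (batch, 2).
single_unit_output = np.zeros((4, 1))
two_unit_output = np.zeros((4, 2))

print(targets.shape, single_unit_output.shape, two_unit_output.shape)
```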

I hope you can point me in the right direction to solve this problem.

Thanks for taking the time to read the post!


Edit 1:

Below is the code of the model to be trained. I have fixed the dimension issues.

from keras.layers import concatenate, Dense, Input
from keras.models import Model

img_shape = (224, 224, 3)

img_top = Input(shape=img_shape)
img_bottom = Input(shape=img_shape)
featureExtractor = vgg(img_shape)

feats_top = featureExtractor(img_top)
feats_bottom = featureExtractor(img_bottom)

combined = concatenate([feats_top,feats_bottom]) 

x = Dense(4096,activation='relu')(combined)
x = Dense(4096,activation='relu')(x)
x = Dense(4096,activation='relu')(x)
x = Dense(2,activation='softmax')(x)

model = Model(inputs=[img_top,img_bottom],outputs=x)
