
How to get the output of an intermediate layer in Keras when using Siamese Networks and the Functional API?


I have the following network definition for my Siamese network:

from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, LeakyReLU,
                                     Add, MaxPooling2D, Dropout,
                                     GlobalAveragePooling2D, Dense)

def build_siamese_model(inputShape, embeddingDim=48):
    # specify the inputs for the feature extractor network
    inputs = Input(inputShape)

    ## first set of CONV => BN => RELU => RESID => POOL => DROPOUT layers
    first_conv1 = Conv2D(32, (3, 3), padding="same")(inputs)
    first_batch_norm1 = BatchNormalization()(first_conv1)
    first_act1 = LeakyReLU()(first_batch_norm1)

    second_conv1 = Conv2D(32, (5, 5), padding="same")(inputs)
    second_batch_norm1 = BatchNormalization()(second_conv1)
    second_act1 = LeakyReLU()(second_batch_norm1)

    third_conv1 = Conv2D(32, (7, 7), padding="same")(inputs)
    third_batch_norm1 = BatchNormalization()(third_conv1)
    third_act1 = LeakyReLU()(third_batch_norm1)

    residual_block1 = Add()([first_act1, second_act1, third_act1])
    pool1 = MaxPooling2D(pool_size=(2, 2))(residual_block1)
    dropout1 = Dropout(0.3)(pool1)

    # receiver convolutional layer
    # (a (3, 3) kernel size is assumed here and below; the original code omitted it,
    #  but Conv2D requires a kernel size)
    receiver1_conv = Conv2D(32, (3, 3), padding="same")(dropout1)
    receiver1_batch_norm = BatchNormalization()(receiver1_conv)
    act_receiver1 = LeakyReLU()(receiver1_batch_norm)

    ## second set of CONV => BN => RELU => RESID => POOL => DROPOUT layers
    first_conv2 = Conv2D(32, (3, 3), padding="same")(act_receiver1)
    first_batch_norm2 = BatchNormalization()(first_conv2)
    first_act2 = LeakyReLU()(first_batch_norm2)

    second_conv2 = Conv2D(32, (3, 3), padding="same")(act_receiver1)
    second_batch_norm2 = BatchNormalization()(second_conv2)
    second_act2 = LeakyReLU()(second_batch_norm2)

    third_conv2 = Conv2D(32, (3, 3), padding="same")(act_receiver1)
    third_batch_norm2 = BatchNormalization()(third_conv2)
    third_act2 = LeakyReLU()(third_batch_norm2)

    residual_block2 = Add()([first_act2, second_act2, third_act2])
    pool2 = MaxPooling2D(pool_size=(2, 2))(residual_block2)
    dropout2 = Dropout(0.3)(pool2)

    # receiver convolutional layer
    receiver2_conv = Conv2D(32, (3, 3), padding="same")(dropout2)
    receiver2_batch_norm = BatchNormalization()(receiver2_conv)
    act_receiver2 = LeakyReLU()(receiver2_batch_norm)

    ## last set of CONV => BN => RELU => RESID => POOL => DROPOUT layers
    first_conv3 = Conv2D(32, (3, 3), padding="same")(act_receiver2)
    first_batch_norm3 = BatchNormalization()(first_conv3)
    first_act3 = LeakyReLU()(first_batch_norm3)

    second_conv3 = Conv2D(32, (3, 3), padding="same")(act_receiver2)
    second_batch_norm3 = BatchNormalization()(second_conv3)
    second_act3 = LeakyReLU()(second_batch_norm3)

    third_conv3 = Conv2D(32, (3, 3), padding="same")(act_receiver2)
    third_batch_norm3 = BatchNormalization()(third_conv3)
    third_act3 = LeakyReLU()(third_batch_norm3)

    residual_block3 = Add()([first_act3, second_act3, third_act3])
    pool3 = MaxPooling2D(pool_size=(2, 2))(residual_block3)
    dropout3 = Dropout(0.3)(pool3)

    # last receiver convolutional layer
    receiver3_conv = Conv2D(32, (3, 3), padding="same")(dropout3)
    receiver3_batch_norm = BatchNormalization()(receiver3_conv)
    act_receiver3 = LeakyReLU()(receiver3_batch_norm)

    # prepare the final outputs: a 48-d embedding per image
    pooledOutput = GlobalAveragePooling2D()(act_receiver3)
    outputs = Dense(embeddingDim)(pooledOutput)
    # build the model
    model = Model(inputs, outputs)
    return model

However, this part is wired into my network's inputs and outputs through the Functional API. Here is how I link the pieces together:

print("[INFO] building siamese network...")
imgA = Input(shape=config.IMG_SHAPE)
imgB = Input(shape=config.IMG_SHAPE)

featureExtractor = build_siamese_model(config.IMG_SHAPE)

featsA = featureExtractor(imgA)
featsB = featureExtractor(imgB)

distance = Lambda(utils.euclidean_distance)([featsA,featsB])

outputs = Dense(1,activation="sigmoid")(distance)
model = Model(inputs=[imgA,imgB],outputs=outputs)
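The question does not show utils.euclidean_distance. For context, a common implementation of such a helper (an assumption here, not necessarily the asker's actual code) computes the Euclidean distance between the two embedding tensors, with a small epsilon for numerical stability:

import tensorflow.keras.backend as K

def euclidean_distance(vectors):
    # unpack the two embedding tensors produced by the shared feature extractor
    (featsA, featsB) = vectors
    # element-wise squared difference, summed over the embedding dimension
    sum_squared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True)
    # sqrt with an epsilon floor to avoid NaN gradients when the distance is zero
    return K.sqrt(K.maximum(sum_squared, K.epsilon()))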

However, here is the model summary after compiling the model:

[Model summary screenshot: the feature extractor appears as a single Functional layer in the summary]

So the network definition I built above shows up as just a single layer of the overall model.

So, what do I want?

I want to load the model and extract the output of a specific layer. In particular, I want the output of the last layer of the Functional object (outputs = Dense(48)(pooledOutput) in the network definition above). This would give me a 48-dimensional feature vector for each image in every pair I test with the model.

I tried looking at some previous posts and did the following:

print("Step 1: Loading Model")

model1=load_model("where/the/model/is/located",compile=False)

#I tried the output of the firstlayer,for example
model_with_intermediate_layers = Model(inputs=model1.input,outputs = model1.layers[0].output)

pred = model_with_intermediate_layers.predict([pair_1,pair_2],steps = 1) 
print(pred) 

What is the problem?

The problem with the code above is that it can only access layers 0, 1, 3 and 4. Layers 0 and 1 give the input shapes, layer 3 gives me the score, and layer 4 is empty. **I want to access the intermediate layers, especially the last layer of the feature extractor network.** How can I do that?

Solution

Considering that (i) my Functional object is the second layer of the network, (ii) I want the output of its final layer, and (iii) the second layer's output is the input of the third layer, I solved it with the code below:

import numpy as np

# I take layer 3's input, which is the same as the second layer's output
# (the last layer of my Functional model)
model_intermediate = Model(inputs=model1.input, outputs=model1.layers[3].input)

# Here I get two 48-d vectors, one per image in the pair.
pred_intermediate = model_intermediate.predict([pair_1, pair_2], steps=1)  # predict_generator is deprecated

pred_intermediate = np.array(pred_intermediate)

print(type(pred_intermediate))
print(pred_intermediate)
print(pred_intermediate.shape)
input()

It gives me exactly what I wanted:

[Screenshot of the printed output: two 48-dimensional embedding vectors, one for each input image]
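As a side note, assuming the shared feature extractor is the nested Functional model sitting at index 2 of model1.layers (which is what the layer indices above suggest), the same 48-d embeddings can also be obtained by pulling that sub-model out and calling it directly on each image. This is a sketch under that assumption, not code from the original post:

# grab the nested Functional model (the shared feature extractor);
# index 2 assumes the layer order Input, Input, Functional, Lambda, Dense
feature_extractor = model1.layers[2]

# each call returns one 48-d embedding per image in the batch
feats_a = feature_extractor.predict(pair_1)
feats_b = feature_extractor.predict(pair_2)
print(feats_a.shape, feats_b.shape)  # e.g. (batch_size, 48)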
