
VGG16 net: validation accuracy drops after loading weights from a saved file

How to fix VGG16 net validation accuracy after loading weights from a saved file

I am trying to classify data labeled with 16 image classes. I get 60% validation accuracy during training, but when I try to predict the class labels on the same validation data, my accuracy drops to about 6%.

Loading the weights from the saved file:

```
import numpy as np

model.load_weights("weights_copy.best.hdf5")
valid_generator.reset()
nb_validation_samples = len(valid_generator.classes)
pred= model.predict_generator(valid_generator,nb_validation_samples//batch_size)
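# Note: the integer division above drops the final partial batch whenever the
# sample count is not a multiple of batch_size, and valid_generator must have
# been created with shuffle=False for `classes` to line up with the predictions.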
predicted_class_indices=np.argmax(pred,axis=1)
labels=(valid_generator.class_indices)
labels2=dict((v,k) for k,v in labels.items())
predictions=[labels2[k] for k in predicted_class_indices]
true_labels=[labels2[k] for k in valid_generator.classes]
```

Could you tell me why this is happening?

Solution

I could not find the code you used to manually compute the accuracy after predicting with predict_generator.
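For reference, a manual check on your 16-class validation set could look like the minimal sketch below. It reuses model, valid_generator and batch_size from your snippet, and it assumes valid_generator was created with shuffle=False (otherwise valid_generator.classes is not in the same order as the predictions); accuracy_score is just a convenient stand-in for computing the accuracy by hand.

```
import math

import numpy as np
from sklearn.metrics import accuracy_score

valid_generator.reset()
# Round the step count up so the final partial batch is not silently dropped
steps = math.ceil(len(valid_generator.classes) / batch_size)
pred = model.predict_generator(valid_generator, steps=steps)
predicted_class_indices = np.argmax(pred, axis=1)  # 16-way class scores

acc = accuracy_score(valid_generator.classes, predicted_class_indices)
print('Manual validation accuracy: {:.2f} %'.format(acc * 100))
```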

Below is a simple cats-vs-dogs classification program that I ran, in which -

  1. The validation accuracy is displayed at the end of the epoch.
  2. The weights are then saved using save_weights.
  3. The model is later reloaded using load_weights.
  4. A confusion matrix is built with sklearn's confusion_matrix, and the accuracy is computed from it.

The validation accuracy computed at the end of the epoch matches the manually computed accuracy.

Code -

```

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,Flatten,Dropout,MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam

import os
import numpy as np
import matplotlib.pyplot as plt

_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip',origin=_URL,extract=True)

PATH = os.path.join(os.path.dirname(path_to_zip),'cats_and_dogs_filtered')

train_dir = os.path.join(PATH,'train')
validation_dir = os.path.join(PATH,'validation')

train_cats_dir = os.path.join(train_dir,'cats')  # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir,'dogs')  # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir,'cats')  # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir,'dogs')  # directory with our validation dog pictures

num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))

num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))

total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val

batch_size = 128
epochs = 1
IMG_HEIGHT = 150
IMG_WIDTH = 150

train_image_generator = ImageDataGenerator(rescale=1./255,brightness_range=[0.5,1.5]) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,directory=train_dir,shuffle=True,target_size=(IMG_HEIGHT,IMG_WIDTH),class_mode='binary')

val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,directory=validation_dir,shuffle=False,target_size=(IMG_HEIGHT,IMG_WIDTH),class_mode='binary') # shuffle=False keeps classes aligned with predictions

model = Sequential([
    Conv2D(16,3,padding='same',activation='relu',input_shape=(IMG_HEIGHT,IMG_WIDTH,3)),
    MaxPooling2D(),
    Conv2D(32,3,padding='same',activation='relu'),
    MaxPooling2D(),
    Conv2D(64,3,padding='same',activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512,activation='relu'),
    Dense(1)
])

optimizer = 'SGD'

model.compile(optimizer=optimizer,loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),metrics=['accuracy'])

history = model.fit_generator(
          train_data_gen,steps_per_epoch=total_train // batch_size,epochs=epochs,validation_data=val_data_gen,validation_steps=total_val // batch_size)

# Save the weights
model.save_weights('my_model.hdf5')

from sklearn.metrics import confusion_matrix

# Load the weights
model.load_weights('my_model.hdf5')

# Reset 
val_data_gen.reset()

# Predictions
pred = model.predict_generator(val_data_gen)
# The model outputs a single logit per sample (from_logits=True), so threshold
# at 0; np.argmax over a 1-column output would always return 0.
predicted = np.where(pred.ravel() > 0, 1, 0)

# Actual Labels
labels = val_data_gen.classes

# Compute Accuracy
conf_mat = confusion_matrix(labels,predicted)  # confusion_matrix expects (y_true, y_pred)
acc = np.sum(conf_mat.diagonal()) / np.sum(conf_mat)
print('Validation accuracy: {} %'.format(acc*100))
```
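Two details in this program matter for the two accuracies to agree: the validation generator is created with shuffle=False, so val_data_gen.classes stays in the same order as the predictions, and predict_generator runs over the entire generator rather than a truncated number of steps. If the validation generator shuffles batches (the flow_from_directory default), the predictions are compared against misaligned labels and the measured accuracy collapses toward chance level; with 16 classes that is roughly 1/16 ≈ 6%, which matches the symptom in your question.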
