
Keras Hyperband search with a directory iterator

How do I run a Keras Hyperband search with a directory iterator?

I am using TensorFlow's flow_from_directory to collect a large image dataset and then train on it. I want to use Keras Tuner, but when I run

tuner.search(test_data_gen,epochs=50,validation_split=0.2,callbacks=[stop_early]) 

it throws the following error:

ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found following types in the input: [<class 'tensorflow.python.keras.preprocessing.image.DirectoryIterator'>] 

I don't know much about converting between data types in AI, so any help is much appreciated.

Here is the rest of my code:

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import IPython.display as display
from PIL import Image,ImageSequence
import os
import pathlib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,Flatten,Dropout,MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import cv2
import datetime
import kerastuner as kt

tf.compat.v1.enable_eager_execution()

epochs = 50
steps_per_epoch = 10
batch_size = 20
IMG_HEIGHT = 200
IMG_WIDTH = 200

train_dir = "Data/Train"
test_dir = "Data/Val"

train_image_generator = ImageDataGenerator(rescale=1. / 255)

test_image_generator = ImageDataGenerator(rescale=1. / 255)

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,directory=train_dir,shuffle=True,target_size=(IMG_HEIGHT,IMG_WIDTH),class_mode='sparse')

test_data_gen = test_image_generator.flow_from_directory(batch_size=batch_size,directory=test_dir,class_mode='sparse')


def model_builder(hp):
    model = keras.Sequential()
    model.add(Conv2D(265,3,padding='same',activation='relu',input_shape=(IMG_HEIGHT,IMG_WIDTH,3)))
    model.add(MaxPooling2D())
    model.add(Conv2D(64,3,activation='relu'))
    model.add(MaxPooling2D())
    model.add(Conv2D(32,3,activation='relu'))
    model.add(MaxPooling2D())
    model.add(Flatten())
    model.add(keras.layers.Dense(256,activation="relu"))
    hp_units = hp.Int('units',min_value=32,max_value=512,step=32)
    model.add(keras.layers.Dense(hp_units,activation="relu"))
    model.add(keras.layers.Dense(80,activation="softmax"))

    hp_learning_rate = hp.Choice('learning_rate',values=[1e-2,1e-3,1e-4])

    model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),metrics=['top_k_categorical_accuracy'])

    return model

tuner = kt.Hyperband(model_builder,objective='val_accuracy',max_epochs=30,factor=3,directory='Hypertuner_Dir',project_name='AIOS')

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss',patience=5)

and use the tuner to start the search:

tuner.search(train_data_gen,callbacks=[stop_early])

# Get the optimal hyperparameters
best_hps=tuner.get_best_hyperparameters(num_trials=1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")

model = tuner.hypermodel.build(best_hps)

model.summary()
tf.keras.utils.plot_model(model,to_file="model.png",show_shapes=True,show_layer_names=True,rankdir='TB')
checkpoint_path = "training/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,save_weights_only=True,verbose=1)

os.system("rm -r logs")

log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,histogram_freq=1)

#history = model.fit(train_data_gen,steps_per_epoch=steps_per_epoch,epochs=epochs,validation_data=test_data_gen,validation_steps=10,callbacks=[cp_callback,tensorboard_callback])
history = model.fit(train_data_gen,steps_per_epoch=steps_per_epoch,epochs=epochs,validation_data=test_data_gen,validation_steps=10,callbacks=[cp_callback,tensorboard_callback])
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.save('model.h5',include_optimizer=True)

test_loss,test_acc = model.evaluate(test_data_gen)
print("Tested Acc: ",test_acc)
print("Tested Acc: ",test_acc*100,"%")

val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))


Solution

Unfortunately, validation_split=0.2 won't work in this case, because that argument assumes the data is a Tensor or a NumPy array. Since you store your data as a generator (which is a good idea), it cannot simply be split.

You need to create a validation generator, just like you did for test_data_gen, and change validation_split=0.2 to validation_data=val_data_gen.
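For example, here is a minimal sketch of one way to build such a generator (my own illustration, not code from the question): ImageDataGenerator's validation_split plus the subset argument of flow_from_directory carve a validation set out of the same train_dir, and the resulting generator is passed as validation_data. The name val_data_gen matches the answer above; the other settings mirror the question.

# Illustrative sketch: reserve 20% of the images under train_dir for validation
train_image_generator = ImageDataGenerator(rescale=1. / 255, validation_split=0.2)

# Training subset (roughly 80% of the images)
train_data_gen = train_image_generator.flow_from_directory(directory=train_dir, batch_size=batch_size, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='sparse', subset='training')

# Validation subset (the remaining 20%)
val_data_gen = train_image_generator.flow_from_directory(directory=train_dir, batch_size=batch_size, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='sparse', subset='validation')

# Pass the generator instead of validation_split
tuner.search(train_data_gen, epochs=50, validation_data=val_data_gen, callbacks=[stop_early])

Alternatively, point a second flow_from_directory at a separate validation folder, exactly as the question already does for test_data_gen.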


According to the documentation on validation_split:

validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance.

Now, since you have generators, try something like the following (reference):

tuner.search(train_data_gen,epochs=50,validation_data=test_data_gen,callbacks=[stop_early])

Also, make sure each of your generators yields valid batches correctly.
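As a quick sanity check (a sketch, not part of the original answer), you can pull a single batch from a generator and confirm that the shapes and labels look reasonable before handing it to the tuner:

# Illustrative check: grab one batch and inspect it
images, labels = next(train_data_gen)
print(images.shape)                 # e.g. (batch_size, IMG_HEIGHT, IMG_WIDTH, 3), depending on target_size
print(labels.shape)                 # (batch_size,) for class_mode='sparse'
print(labels.min(), labels.max())   # labels should stay within the number of classes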
