How can I replicate an AutoKeras model using custom blocks?
I am using a StructuredDataRegressor to make some predictions. I assume the generated "oracle.json" describes the architecture and hyperparameters of the best model found. This is what the relevant part of "oracle.json" says:
"values": {
    "structured_data_block_1/normalize": false,
    "structured_data_block_1/dense_block_1/num_layers": 2,
    "structured_data_block_1/dense_block_1/use_batchnorm": false,
    "structured_data_block_1/dense_block_1/dropout": 0,
    "structured_data_block_1/dense_block_1/units_0": 32,
    "structured_data_block_1/dense_block_1/units_1": 32,
    "regression_head_1/dropout": 0,
    "optimizer": "adam",
    "learning_rate": 0.001,
    "structured_data_block_1/dense_block_1/units_2": 32
}
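For context, the flat keys in "values" follow a `<block path>/<hyperparameter name>` naming scheme. A minimal pure-Python sketch (no AutoKeras required; the `group_by_block` helper is my own, just for illustration) of grouping these keys by block, which makes it easier to see which values belong to which block:

```python
import json

# The "values" section quoted above, as a Python dict.
values = json.loads("""{
    "structured_data_block_1/normalize": false,
    "structured_data_block_1/dense_block_1/num_layers": 2,
    "structured_data_block_1/dense_block_1/use_batchnorm": false,
    "structured_data_block_1/dense_block_1/dropout": 0,
    "structured_data_block_1/dense_block_1/units_0": 32,
    "structured_data_block_1/dense_block_1/units_1": 32,
    "regression_head_1/dropout": 0,
    "optimizer": "adam",
    "learning_rate": 0.001,
    "structured_data_block_1/dense_block_1/units_2": 32
}""")

def group_by_block(values):
    """Split 'block/.../name' keys into {block_path: {name: value}}.

    Keys without a '/' (e.g. 'optimizer') are grouped under '<global>'.
    """
    grouped = {}
    for key, value in values.items():
        path, _, name = key.rpartition("/")
        grouped.setdefault(path or "<global>", {})[name] = value
    return grouped

grouped = group_by_block(values)
# e.g. grouped["structured_data_block_1/dense_block_1"]["num_layers"] == 2
```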
How can I replicate this model using Blocks, so that I only need to train it exactly once with these values?
Here are some of the things I have tried. Note that each approach was tried separately; I did not run all of this code together. It is shown in one piece here only so I don't have to open multiple code sections. If any of these sections should in principle work, let me know and I can elaborate on exactly where it fails or misbehaves.
# I assume I always need this
input_node = ak.StructuredDataInput()
# Played around with StructuredDataBlock, but there is no way to
# pass `num_layers`, `use_batchnorm`, `dropout` to the
# DenseBlock that it internally uses.
output_node = ak.StructuredDataBlock(
    normalize=False,
    # **{
    #     "num_layers": 2,
    #     "use_batchnorm": False,
    #     "dropout": 0,
    # }
    # num_layers=2,
    # use_batchnorm=False,
    # dropout=0,
)(input_node)
# Tried to replicate what StructuredDataBlock is doing. It runs,
# but the resulting "oracle.json" does not look like the
# original one, and it does not respect
# `num_layers`, `dropout`.
output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.Normalization()(output_node)
output_node = ak.DenseBlock(
    num_layers=2,
    use_batchnorm=False,
    dropout=0.0,
)(output_node)
# Tried to use DenseBlock exactly the way it's used in
# "StructuredDataBlock", but that also didn't work.
hp = HyperParameters()  # from keras-tuner
output_node = input_node
block = ak.CategoricalToNumerical()
output_node = block.build(hp, output_node)
output_node = ak.DenseBlock().build(hp, output_node)
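What the `block.build(hp, node)` pattern relies on is that each block queries the `hp` object by the same namespaced names that appear in oracle.json, so pre-registering fixed values under those names might pin them (this is my assumption, not a documented AutoKeras recipe; keras-tuner does expose `hp.Fixed(name, value)` for this purpose). A pure-Python sketch of that lookup behavior, using a tiny stand-in class (`FixedHyperParameters` is hypothetical, written here so the example runs without keras-tuner installed):

```python
class FixedHyperParameters:
    """Illustrative stand-in for keras-tuner's HyperParameters:
    each query method returns a pre-registered fixed value when one
    exists under that name, otherwise the supplied default. Names are
    the same namespaced strings seen in oracle.json."""

    def __init__(self, fixed):
        self.fixed = dict(fixed)

    def _get(self, name, default):
        return self.fixed.get(name, default)

    # The query methods blocks call inside build():
    def Choice(self, name, values, default=None):
        return self._get(name, default if default is not None else values[0])

    def Int(self, name, min_value, max_value, default=None):
        return self._get(name, default if default is not None else min_value)

    def Boolean(self, name, default=False):
        return self._get(name, default)

# Pin the values taken from oracle.json.
fixed = {
    "structured_data_block_1/dense_block_1/num_layers": 2,
    "structured_data_block_1/dense_block_1/use_batchnorm": False,
    "structured_data_block_1/dense_block_1/dropout": 0,
}
hp = FixedHyperParameters(fixed)

# A DenseBlock-style build would then see the pinned value:
num_layers = hp.Int("structured_data_block_1/dense_block_1/num_layers", 1, 3)
```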
# This is the regression head, which I assume I will always need anyway.
output_node = ak.RegressionHead(
    loss=DimaLoss(),
    metrics=[LOSS_REGISTRY[self.loss_name]],
)(output_node)
# AutoModel
auto_model = ak.AutoModel(
    project_name=PROJECT_NAME,
    directory=dima_trainer_dir,
    overwrite=True,
    max_trials=self.max_trials,
    inputs=input_node,
    outputs=output_node,
    objective=kerastuner.Objective(f"val_{self.loss_name}", direction="min"),
)
# fit
auto_model.fit(
    x=df_train_X,
    y=df_train_y,
    epochs=self.epochs,
    callbacks=[early_stopping],
    validation_split=0.15,
    **{
        "batch_size": self.batch_size,
        # "optimizer": "adam",
        # "learning_rate": 0.002,
    },
    # optimizer="adam",
    # learning_rate=0.002,
)