AutoKeras: an extra trial beyond the `max_trials` count

I have the following Python code:

import autokeras as ak
from tensorflow.keras.callbacks import EarlyStopping

# Stop a trial once val_loss has not improved for `patience` epochs.
patience = 5
early_stop_val_loss = EarlyStopping(
    monitor="val_loss", mode="min", verbose=1, patience=patience
)

# Search over at most 3 candidate models.
reg = ak.StructuredDataRegressor(overwrite=True, max_trials=3)

reg.fit(
    x=df_train_X, y=df_train_y, epochs=20, callbacks=[early_stop_val_loss]
)

My `max_trials` is 3, yet the output shows 4 training runs. Why is there always an extra run at the end, whose val_loss is always worse than that of the best model found? Is some kind of averaging involved, and if so, why? I am going to use the best model in any case.

I also use early stopping, but that final run never actually stops early, while the other 3 "regular" trials do stop early at the appropriate point.
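For reference, the early-stopping behavior seen in the trials can be sketched as a simple patience counter. This is a minimal illustration of what the callback tracks (assuming the default min_delta=0), not Keras's actual implementation:

```python
def stopped_epoch(val_losses, patience):
    """Return the 1-based epoch at which training would stop early
    with monitor="val_loss", mode="min", or None if all epochs run."""
    best = float("inf")
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Trial 3's val_loss sequence from the log below: the best value
# (1.7111) occurs at epoch 2, then 5 epochs pass without improvement,
# so the callback fires at epoch 7.
trial_3 = [2.1730, 1.7111, 1.8622, 1.7135, 1.8657, 1.8281, 1.7427]
print(stopped_epoch(trial_3, patience=5))  # -> 7
```

This matches the "Epoch 00007: early stopping" line in trial 3's log, which is why the regular trials stopping where they do looks correct while the final run not stopping at all is surprising.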

Here is the output for reference:

2020-09-19 08:26:21.006490: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-09-19 08:26:21.064340: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fd698131740 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-19 08:26:21.064362: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

Search: Running Trial #1

Hyperparameter      |Value     |Best Value So Far   
structured_data_block_1/normalize|False     |?                   
structured_data_block_1/dense_block_1/num_layers|2         |?                   
structured_data_block_1/dense_block_1/use_batchnorm|False     |?                   
structured_data_block_1/dense_block_1/dropout|0         |?                   
structured_data_block_1/dense_block_1/units_0|32        |?                   
structured_data_block_1/dense_block_1/units_1|32        |?                   
regression_head_1/dropout|0.25      |?                   
optimizer           |adam      |?                   
learning_rate       |0.001     |?                   

Epoch 1/20
465/465 [==============================] - 1s 2ms/step - loss: 5813.7134 - mean_squared_error: 5813.7134 - val_loss: 51.8876 - val_mean_squared_error: 51.8876
Epoch 2/20
465/465 [==============================] - 1s 1ms/step - loss: 38.5871 - mean_squared_error: 38.5871 - val_loss: 9.9384 - val_mean_squared_error: 9.9384
Epoch 3/20
465/465 [==============================] - 1s 1ms/step - loss: 10.2094 - mean_squared_error: 10.2094 - val_loss: 6.6585 - val_mean_squared_error: 6.6585
Epoch 4/20
465/465 [==============================] - 1s 1ms/step - loss: 5.4702 - mean_squared_error: 5.4702 - val_loss: 6.4847 - val_mean_squared_error: 6.4847
Epoch 5/20
465/465 [==============================] - 1s 1ms/step - loss: 4.0269 - mean_squared_error: 4.0269 - val_loss: 4.8121 - val_mean_squared_error: 4.8121
Epoch 6/20
465/465 [==============================] - 1s 1ms/step - loss: 3.3543 - mean_squared_error: 3.3543 - val_loss: 4.8146 - val_mean_squared_error: 4.8146
Epoch 7/20
465/465 [==============================] - 1s 1ms/step - loss: 2.9394 - mean_squared_error: 2.9394 - val_loss: 4.4131 - val_mean_squared_error: 4.4131
Epoch 8/20
465/465 [==============================] - 1s 1ms/step - loss: 2.7398 - mean_squared_error: 2.7398 - val_loss: 4.2377 - val_mean_squared_error: 4.2377
Epoch 9/20
465/465 [==============================] - 1s 1ms/step - loss: 2.5196 - mean_squared_error: 2.5196 - val_loss: 4.0354 - val_mean_squared_error: 4.0354
Epoch 10/20
465/465 [==============================] - 1s 1ms/step - loss: 2.3083 - mean_squared_error: 2.3083 - val_loss: 3.9004 - val_mean_squared_error: 3.9004
Epoch 11/20
465/465 [==============================] - 1s 1ms/step - loss: 2.1087 - mean_squared_error: 2.1087 - val_loss: 3.8676 - val_mean_squared_error: 3.8676
Epoch 12/20
465/465 [==============================] - 1s 1ms/step - loss: 1.9194 - mean_squared_error: 1.9194 - val_loss: 3.7441 - val_mean_squared_error: 3.7441
Epoch 13/20
465/465 [==============================] - 1s 1ms/step - loss: 1.7461 - mean_squared_error: 1.7461 - val_loss: 3.4682 - val_mean_squared_error: 3.4682
Epoch 14/20
465/465 [==============================] - 1s 1ms/step - loss: 1.6202 - mean_squared_error: 1.6202 - val_loss: 3.1718 - val_mean_squared_error: 3.1718
Epoch 15/20
465/465 [==============================] - 1s 1ms/step - loss: 1.4932 - mean_squared_error: 1.4932 - val_loss: 2.8599 - val_mean_squared_error: 2.8599
Epoch 16/20
465/465 [==============================] - 0s 1ms/step - loss: 1.4384 - mean_squared_error: 1.4384 - val_loss: 2.5463 - val_mean_squared_error: 2.5463
Epoch 17/20
465/465 [==============================] - 1s 1ms/step - loss: 1.3569 - mean_squared_error: 1.3569 - val_loss: 2.2544 - val_mean_squared_error: 2.2544
Epoch 18/20
465/465 [==============================] - 1s 1ms/step - loss: 1.3003 - mean_squared_error: 1.3003 - val_loss: 2.1111 - val_mean_squared_error: 2.1111
Epoch 19/20
465/465 [==============================] - 1s 1ms/step - loss: 1.2799 - mean_squared_error: 1.2799 - val_loss: 2.0212 - val_mean_squared_error: 2.0212
Epoch 20/20
465/465 [==============================] - 1s 1ms/step - loss: 1.2712 - mean_squared_error: 1.2712 - val_loss: 1.9859 - val_mean_squared_error: 1.9859

Trial 1 Complete [00h 00m 12s]
val_loss: 1.9859470129013062

Best val_loss So Far: 1.9859470129013062
Total elapsed time: 00h 00m 12s

Search: Running Trial #2

Hyperparameter      |Value     |Best Value So Far   
structured_data_block_1/normalize|False     |False               
structured_data_block_1/dense_block_1/num_layers|2         |2                   
structured_data_block_1/dense_block_1/use_batchnorm|False     |False               
structured_data_block_1/dense_block_1/dropout|0         |0                   
structured_data_block_1/dense_block_1/units_0|32        |32                  
structured_data_block_1/dense_block_1/units_1|32        |32                  
regression_head_1/dropout|0.5       |0.25                
optimizer           |adam      |adam                
learning_rate       |0.001     |0.001               

Epoch 1/20
465/465 [==============================] - 1s 1ms/step - loss: 7586.9214 - mean_squared_error: 7586.9214 - val_loss: 20.4877 - val_mean_squared_error: 20.4877
Epoch 2/20
465/465 [==============================] - 1s 1ms/step - loss: 22.8722 - mean_squared_error: 22.8722 - val_loss: 10.3112 - val_mean_squared_error: 10.3112
Epoch 3/20
465/465 [==============================] - 1s 1ms/step - loss: 9.5569 - mean_squared_error: 9.5569 - val_loss: 8.0337 - val_mean_squared_error: 8.0337
Epoch 4/20
465/465 [==============================] - 1s 1ms/step - loss: 4.6928 - mean_squared_error: 4.6928 - val_loss: 7.0247 - val_mean_squared_error: 7.0247
Epoch 5/20
465/465 [==============================] - 1s 1ms/step - loss: 4.8696 - mean_squared_error: 4.8696 - val_loss: 6.3435 - val_mean_squared_error: 6.3435
Epoch 6/20
465/465 [==============================] - 1s 1ms/step - loss: 3.7421 - mean_squared_error: 3.7421 - val_loss: 5.6629 - val_mean_squared_error: 5.6629
Epoch 7/20
465/465 [==============================] - 1s 1ms/step - loss: 3.4249 - mean_squared_error: 3.4249 - val_loss: 5.2400 - val_mean_squared_error: 5.2400
Epoch 8/20
465/465 [==============================] - 1s 1ms/step - loss: 3.1833 - mean_squared_error: 3.1833 - val_loss: 4.8870 - val_mean_squared_error: 4.8870
Epoch 9/20
465/465 [==============================] - 1s 1ms/step - loss: 3.4340 - mean_squared_error: 3.4340 - val_loss: 4.3948 - val_mean_squared_error: 4.3948
Epoch 10/20
465/465 [==============================] - 1s 1ms/step - loss: 2.6573 - mean_squared_error: 2.6573 - val_loss: 4.0909 - val_mean_squared_error: 4.0909
Epoch 11/20
465/465 [==============================] - 1s 1ms/step - loss: 2.3987 - mean_squared_error: 2.3987 - val_loss: 3.9018 - val_mean_squared_error: 3.9018
Epoch 12/20
465/465 [==============================] - 1s 1ms/step - loss: 2.1663 - mean_squared_error: 2.1663 - val_loss: 3.5732 - val_mean_squared_error: 3.5732
Epoch 13/20
465/465 [==============================] - 1s 1ms/step - loss: 1.9187 - mean_squared_error: 1.9187 - val_loss: 3.2992 - val_mean_squared_error: 3.2992
Epoch 14/20
465/465 [==============================] - 0s 1ms/step - loss: 1.8088 - mean_squared_error: 1.8088 - val_loss: 3.0437 - val_mean_squared_error: 3.0437
Epoch 15/20
465/465 [==============================] - 1s 1ms/step - loss: 1.5896 - mean_squared_error: 1.5896 - val_loss: 2.7551 - val_mean_squared_error: 2.7551
Epoch 16/20
465/465 [==============================] - 0s 1ms/step - loss: 1.4675 - mean_squared_error: 1.4675 - val_loss: 2.5052 - val_mean_squared_error: 2.5052
Epoch 17/20
465/465 [==============================] - 1s 1ms/step - loss: 1.3750 - mean_squared_error: 1.3750 - val_loss: 2.3644 - val_mean_squared_error: 2.3644
Epoch 18/20
465/465 [==============================] - 1s 1ms/step - loss: 1.3246 - mean_squared_error: 1.3246 - val_loss: 2.1681 - val_mean_squared_error: 2.1681
Epoch 19/20
465/465 [==============================] - 1s 1ms/step - loss: 1.2935 - mean_squared_error: 1.2935 - val_loss: 2.0592 - val_mean_squared_error: 2.0592
Epoch 20/20
465/465 [==============================] - 1s 1ms/step - loss: 1.2762 - mean_squared_error: 1.2762 - val_loss: 1.9927 - val_mean_squared_error: 1.9927

Trial 2 Complete [00h 00m 12s]
val_loss: 1.9927339553833008

Best val_loss So Far: 1.9859470129013062
Total elapsed time: 00h 00m 24s

Search: Running Trial #3

Hyperparameter      |Value     |Best Value So Far   
structured_data_block_1/normalize|False     |False               
structured_data_block_1/dense_block_1/num_layers|2         |2                   
structured_data_block_1/dense_block_1/use_batchnorm|True      |False               
structured_data_block_1/dense_block_1/dropout|0         |0                   
structured_data_block_1/dense_block_1/units_0|32        |32                  
structured_data_block_1/dense_block_1/units_1|32        |32                  
regression_head_1/dropout|0.25      |0.25                
optimizer           |adam      |adam                
learning_rate       |0.001     |0.001               

Epoch 1/20
465/465 [==============================] - 1s 3ms/step - loss: 1.9166 - mean_squared_error: 1.9166 - val_loss: 2.1730 - val_mean_squared_error: 2.1730
Epoch 2/20
465/465 [==============================] - 1s 2ms/step - loss: 1.5008 - mean_squared_error: 1.5008 - val_loss: 1.7111 - val_mean_squared_error: 1.7111
Epoch 3/20
465/465 [==============================] - 1s 1ms/step - loss: 1.3942 - mean_squared_error: 1.3942 - val_loss: 1.8622 - val_mean_squared_error: 1.8622
Epoch 4/20
465/465 [==============================] - 1s 1ms/step - loss: 1.3493 - mean_squared_error: 1.3493 - val_loss: 1.7135 - val_mean_squared_error: 1.7135
Epoch 5/20
465/465 [==============================] - 1s 2ms/step - loss: 1.3094 - mean_squared_error: 1.3094 - val_loss: 1.8657 - val_mean_squared_error: 1.8657
Epoch 6/20
465/465 [==============================] - 1s 2ms/step - loss: 1.2928 - mean_squared_error: 1.2928 - val_loss: 1.8281 - val_mean_squared_error: 1.8281
Epoch 7/20
465/465 [==============================] - 1s 1ms/step - loss: 1.2559 - mean_squared_error: 1.2559 - val_loss: 1.7427 - val_mean_squared_error: 1.7427
Epoch 00007: early stopping

Trial 3 Complete [00h 00m 07s]
val_loss: 1.7111293077468872

Best val_loss So Far: 1.7111293077468872
Total elapsed time: 00h 00m 31s
Epoch 1/20
581/581 [==============================] - 1s 1ms/step - loss: 2.9020 - mean_squared_error: 2.9020
Epoch 2/20
581/581 [==============================] - 1s 1ms/step - loss: 1.6696 - mean_squared_error: 1.6696
Epoch 3/20
581/581 [==============================] - 1s 1ms/step - loss: 1.5727 - mean_squared_error: 1.5727
Epoch 4/20
581/581 [==============================] - 1s 1ms/step - loss: 1.5077 - mean_squared_error: 1.5077
Epoch 5/20
581/581 [==============================] - 1s 1ms/step - loss: 1.4542 - mean_squared_error: 1.4542
Epoch 6/20
581/581 [==============================] - 1s 1ms/step - loss: 1.4276 - mean_squared_error: 1.4276
Epoch 7/20
581/581 [==============================] - 1s 1ms/step - loss: 1.4152 - mean_squared_error: 1.4152
Epoch 8/20
581/581 [==============================] - 1s 1ms/step - loss: 1.4054 - mean_squared_error: 1.4054
Epoch 9/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3798 - mean_squared_error: 1.3798
Epoch 10/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3774 - mean_squared_error: 1.3774
Epoch 11/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3597 - mean_squared_error: 1.3597
Epoch 12/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3406 - mean_squared_error: 1.3406
Epoch 13/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3290 - mean_squared_error: 1.3290
Epoch 14/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3214 - mean_squared_error: 1.3214
Epoch 15/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3295 - mean_squared_error: 1.3295
Epoch 16/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3116 - mean_squared_error: 1.3116
Epoch 17/20
581/581 [==============================] - 1s 1ms/step - loss: 1.3081 - mean_squared_error: 1.3081
Epoch 18/20
581/581 [==============================] - 1s 1ms/step - loss: 1.2995 - mean_squared_error: 1.2995
Epoch 19/20
581/581 [==============================] - 1s 1ms/step - loss: 1.2924 - mean_squared_error: 1.2924
Epoch 20/20
581/581 [==============================] - 1s 1ms/step - loss: 1.2921 - mean_squared_error: 1.2921
65/65 [==============================] - 0s 846us/step - loss: 3.1227 - mean_squared_error: 3.1227
AAA evaluation: [3.122745990753174, 3.122745990753174]
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

Process finished with exit code 0

