How to fix the learning-rate error when tuning an MLP with GridSearchCV
I am trying to tune the hyperparameters of an MLP classifier with GridSearchCV, but I run into the following errors:
/usr/local/lib/python3.7/dist-packages/sklearn/model_selection/_validation.py:536: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan.
Details:
ValueError: learning rate 0.01 is not supported.
FitFailedWarning)
/usr/local/lib/python3.7/dist-packages/sklearn/model_selection/_validation.py:536: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan.
Details:
ValueError: learning rate 0.02 is not supported
........
Code:
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

clf = MLPClassifier()
params = {
    'hidden_layer_sizes': hidden_layers_generator(X, np.arange(1, 17, 1)),
    'solver': ['sgd'],
    'momentum': np.arange(0.1, 1.1, 0.1),
    'learning_rate': np.arange(0.01, 1.01, 0.01),
    'max_iter': np.arange(100, 2100, 100),
}
grid = GridSearchCV(clf, params, cv=10, scoring='accuracy')
grid.fit(X, y)
grid_mean_scores = grid.cv_results_['mean_test_score']
pd.DataFrame(grid.cv_results_)[['mean_test_score', 'std_test_score', 'params']]
The code for hidden_layers_generator is as follows:
from itertools import combinations_with_replacement

def hidden_layers_generator(df, hidden_layers):
    hd_sizes = []
    for l in range(1, len(hidden_layers)):
        comb = combinations_with_replacement(np.arange(1, len(df.columns), 10), l)
        hd_sizes.append(list(comb))
    return hd_sizes
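As a side note, this generator returns a nested list (one inner list of layer-size tuples per depth), while GridSearchCV expects each parameter's candidates as one flat list, so the output would likely need flattening. A minimal self-contained sketch, using a hypothetical empty 4-column frame `X_demo` that mirrors X's columns:

```python
import numpy as np
import pandas as pd
from itertools import combinations_with_replacement

def hidden_layers_generator(df, hidden_layers):
    hd_sizes = []
    for l in range(1, len(hidden_layers)):
        comb = combinations_with_replacement(np.arange(1, len(df.columns), 10), l)
        hd_sizes.append(list(comb))
    return hd_sizes

# Hypothetical frame with the same four columns as X (sl, sw, pl, pw).
X_demo = pd.DataFrame(columns=['sl', 'sw', 'pl', 'pw'])

# With 4 columns, np.arange(1, 4, 10) holds a single width (1), so each
# depth l contributes one tuple: [[(1,)], [(1, 1)]] for depths 1 and 2.
sizes = hidden_layers_generator(X_demo, np.arange(1, 4, 1))

# Flatten into the shape GridSearchCV expects: one list of tuples.
flat = [tuple(int(w) for w in t) for group in sizes for t in group]
print(flat)
```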
Here is a small excerpt of the X and y dataframes:
X.head()
sl sw pl pw
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
y.head()
0 0
1 1
2 1
3 0
4 0
Solution
If you look at the documentation of MLPClassifier, you will see that the learning_rate parameter is not what you think it is: it is a learning-rate schedule, which only accepts the strings 'constant', 'invscaling', and 'adaptive'. What you want is the learning_rate_init parameter. So change this line in your configuration:

'learning_rate': np.arange(0.01,1.01,0.01),

to

'learning_rate_init': np.arange(0.01,1.01,0.01),
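Put together, the corrected grid looks like the sketch below. The grid is shrunk here (two layer sizes, two schedules, two initial rates, Iris data) so it runs in seconds; the particular values are illustrative, not the question's full ranges:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

params = {
    'hidden_layer_sizes': [(8,), (8, 8)],
    'solver': ['sgd'],
    'learning_rate': ['constant', 'adaptive'],  # schedule: strings only
    'learning_rate_init': [0.01, 0.1],          # the numeric step size
    'max_iter': [300],
}
grid = GridSearchCV(MLPClassifier(random_state=0), params,
                    cv=3, scoring='accuracy')
grid.fit(X, y)
print(grid.best_params_)
```

Note that learning_rate can still be searched, but only over its schedule strings; the numeric values all go to learning_rate_init.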