Optimize the hyperparameter hidden_layer_sizes of MLPClassifier with skopt
How can I optimize the number of layers and the hidden layer sizes of a neural network, using MLPClassifier from sklearn together with skopt?

Normally I would specify my space something like this:

Space([Integer(name='alpha_1', low=1, high=2),
       Real(10**-5, 10**0, "log-uniform", name='alpha_2')])

(assuming hyperparameters alpha_1 and alpha_2).

With the neural network implementation in sklearn, I need to tune hidden_layer_sizes, which is a tuple:
hidden_layer_sizes : tuple, length = n_layers - 2, default=(100,)
The ith element represents the number of neurons in the ith
hidden layer.
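As a concrete illustration (plain Python, nothing sklearn-specific): each entry of the tuple is the width of one hidden layer, and tuple repetition is a convenient way to build a stack of identical layers:

```python
# hidden_layer_sizes is just a tuple: one entry per hidden layer.
two_layers = (100, 50)   # first hidden layer: 100 neurons, second: 50

# Tuple repetition builds a uniform stack of identical layers.
uniform = (50,) * 3
print(uniform)  # (50, 50, 50)
```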
How do I represent this in a Space?
If you use gp_minimize, you can include the number of hidden layers and the number of neurons per layer as parameters in your Space. Inside the definition of the objective function you can then build the hyperparameter hidden_layer_sizes manually.

Here is the example from the scikit-optimize homepage, now using an MLPRegressor:
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from skopt.space import Real, Integer, Categorical
from skopt.utils import use_named_args
from skopt import gp_minimize

# Note: load_boston (used in the original example) was removed in
# scikit-learn 1.2; the California housing dataset is a drop-in
# regression replacement here.
housing = fetch_california_housing()
X, y = housing.data, housing.target
n_features = X.shape[1]

reg = MLPRegressor(random_state=0)

space = [
    Categorical(['tanh', 'relu'], name='activation'),
    Integer(1, 4, name='n_hidden_layer'),
    Integer(200, 2000, name='n_neurons_per_layer'),
]

@use_named_args(space)
def objective(**params):
    n_neurons = params['n_neurons_per_layer']
    n_layers = params['n_hidden_layer']
    # create the hidden layers as a tuple with length n_layers
    # and n_neurons per layer
    params['hidden_layer_sizes'] = (n_neurons,) * n_layers
    # remove the helper parameters so MLPRegressor does not
    # reject them as unknown keyword arguments
    params.pop('n_neurons_per_layer')
    params.pop('n_hidden_layer')
    reg.set_params(**params)
    return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
                                    scoring="neg_mean_absolute_error"))

res_gp = gp_minimize(objective, space, n_calls=50, random_state=0)
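After the optimization finishes, res_gp.x holds the best point found, ordered like the search space, so the same tuple trick is needed once more to turn it back into MLPRegressor keyword arguments. A minimal sketch (the helper name best_to_mlp_params is made up for illustration):

```python
def best_to_mlp_params(x):
    """Convert a gp_minimize result point (ordered as the search space:
    activation, n_hidden_layer, n_neurons_per_layer) into keyword
    arguments accepted by MLPRegressor."""
    activation, n_hidden_layer, n_neurons_per_layer = x
    return {
        "activation": activation,
        "hidden_layer_sizes": (n_neurons_per_layer,) * n_hidden_layer,
    }

# Suppose the optimizer returned res_gp.x == ['relu', 3, 400]:
print(best_to_mlp_params(['relu', 3, 400]))
# {'activation': 'relu', 'hidden_layer_sizes': (400, 400, 400)}
```

You could then refit the final model with MLPRegressor(**best_to_mlp_params(res_gp.x), random_state=0) on the full training set.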