GridSearchCV: CV history of the best model


I am trying to run a hyperparameter search with GridSearchCV and KerasRegressor. The Keras model.fit function itself returns a History object that lets you inspect the 'loss' and 'val_loss' values per epoch.

Is it possible to see the 'loss' and 'val_loss' values when using GridSearchCV?
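
For comparison, this is how a plain Keras fit call exposes the per-epoch losses on its History object (a minimal self-contained sketch with made-up data; the layer sizes, epoch count, and validation_split are placeholders, not part of the grid-search code below):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Minimal sketch: fit() returns a History object with per-epoch losses
keras_model = Sequential([Dense(8, input_dim=4, activation='relu'), Dense(1)])
keras_model.compile(optimizer='Adam', loss='mean_squared_error')
keras_history = keras_model.fit(np.random.rand(100, 4), np.random.rand(100, 1),
                                epochs=10, batch_size=32,
                                validation_split=0.2, verbose=0)
print(keras_history.history['loss'])      # per-epoch training loss
print(keras_history.history['val_loss'])  # per-epoch validation loss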

Here is the code I am using for the grid search:

# Imports needed by this snippet (old Keras scikit-learn wrapper)
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV

# create_model_gridsearch is defined further below
model = KerasRegressor(build_fn=create_model_gridsearch, verbose=0)
layers = [[16], [16, 8]]
activations = ['relu']
optimizers = ['Adam']
param_grid = dict(layers=layers, activation=activations, input_dim=[X_train.shape[1]], output_dim=[Y_train.shape[1]], batch_size=specified_batch_size, epochs=num_of_epochs, optimizer=optimizers)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='neg_mean_squared_error', n_jobs=-1, verbose=1, cv=7)

grid_result = grid.fit(X_train, Y_train)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in sorted(zip(means, stds, params), key=lambda x: x[0]):
    print("%f (%f) with: %r" % (mean, stdev, param))

def create_model_gridsearch(input_dim, output_dim, layers, activation, optimizer):
    model = Sequential()

    # Add one hidden Dense layer per entry in `layers`; only the first needs input_dim
    for i, nodes in enumerate(layers):
        if i == 0:
            model.add(Dense(nodes, input_dim=input_dim))
            model.add(Activation(activation))
        else:
            model.add(Dense(nodes))
            model.add(Activation(activation))
    model.add(Dense(output_dim, activation='linear'))

    model.compile(optimizer=optimizer, loss='mean_squared_error')

    return model

How can I get the per-epoch training and CV loss of the best model, grid_result.best_estimator_.model?

There is no variable like grid_result.best_estimator_.model.history.keys().

Answer (小智):

The history is well hidden. I was able to find it in

grid_result.best_estimator_.model.model.history.history
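
A short usage sketch of how this history can be retrieved (assuming the grid object from the question and the old keras.wrappers.scikit_learn wrapper). Note that GridSearchCV refits the best estimator on the full training set by default, so this history describes only that final refit, not the individual CV folds, and 'val_loss' only appears if validation data was passed to that fit, e.g. via a validation_split fit parameter:

# Forward validation_split to the underlying Keras fit so 'val_loss' is recorded
grid_result = grid.fit(X_train, Y_train, validation_split=0.2)

# History of the refit best model (old keras.wrappers.scikit_learn wrapper)
hist = grid_result.best_estimator_.model.model.history.history
print(hist.keys())       # e.g. dict_keys(['loss', 'val_loss'])
print(hist['loss'])      # per-epoch training loss of the refit
print(hist['val_loss'])  # per-epoch validation loss of the refit

Depending on the Keras version, the inner .model may not exist, in which case grid_result.best_estimator_.model.history.history works directly.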