I am trying to tune an ExtraTreesClassifier with Optuna.
On every trial I get a message like this:
[W 2022-02-10 12:13:12,501] Trial 2 failed because the value None could not be cast to float.
My code is below. This happens on every one of my trials. Can anyone tell me what I am doing wrong?
def objective(trial, X, y):
    param = {
        'verbose': trial.suggest_categorical('verbosity', [1]),
        'random_state': trial.suggest_categorical('random_state', [RS]),
        'n_estimators': trial.suggest_int('n_estimators', 100, 150),
        'n_jobs': trial.suggest_categorical('n_jobs', [-1]),
    }
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=RS)
    clf = ExtraTreesClassifier(**param)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = accuracy_score(y_pred, y_test)
    print(f"Model Accuracy: {round(acc, 6)}")
    print(f"Model Parameters: {param}")
    print('='*50)
    return
study = optuna.create_study(
    direction='maximize',
    sampler=optuna.samplers.TPESampler(),
    pruner=optuna.pruners.HyperbandPruner(),
    study_name='ExtraTrees-Hyperparameter-Tuning')

func = lambda trial: objective(trial, X, y)

%%time
study.optimize(
    func,
    n_trials=100,
    timeout=60,
    gc_after_trial=True
)
Your code is incomplete: the objective function ends with a bare return, so it hands None back to Optuna, and None cannot be cast to a float, which is exactly what the warning says. The objective has to return the value you want to optimize. Here is a working example of how to do this; I am using optuna==2.10.0.
import optuna
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_features=4)  # Generate a sample dataset

def objective(trial):
    param = {
        'random_state': trial.suggest_categorical('random_state', [0, 25, 100, None]),
        'n_estimators': trial.suggest_int('n_estimators', 100, 150)
    }
    suggested_random_state = param['random_state']  # also use the suggested random state value in train_test_split()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=suggested_random_state)
    clf = ExtraTreesClassifier(**param)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = accuracy_score(y_pred, y_test)
    print(f"Model Accuracy: {round(acc, 6)}")
    print(f"Model Parameters: {param}")
    return acc  # return our objective value

if __name__ == "__main__":
    study = optuna.create_study(
        direction="maximize",
        sampler=optuna.samplers.TPESampler()
    )
    study.optimize(objective, n_trials=100)

    print("Number of finished trials: {}".format(len(study.trials)))
    print("Best trial:")
    trial = study.best_trial
    print("  Value: {}".format(trial.value))
    print("  Params: ")
    for key, value in trial.params.items():
        print("    {}: {}".format(key, value))
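The essential change relative to your code is the final return acc. As a minimal sketch of the fix applied directly to your own objective (assuming everything else stays exactly as you posted it), it is enough to return the accuracy instead of a bare return:

def objective(trial, X, y):
    # ... same parameter suggestions, split, fit and prints as in the question ...
    acc = accuracy_score(y_pred, y_test)
    print(f"Model Accuracy: {round(acc, 6)}")
    print(f"Model Parameters: {param}")
    print('='*50)
    return acc  # a bare `return` gives Optuna None, which it cannot cast to float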
Sample output from the full example above:
...
[I 2022-02-22 00:40:32,688] Trial 97 finished with value: 0.75 and parameters: {'random_state': None, 'n_estimators': 149}. Best is trial 15 with value: 1.0.
Model Accuracy: 0.75
Model Parameters: {'random_state': None, 'n_estimators': 134}
[I 2022-02-22 00:40:32,844] Trial 98 finished with value: 0.75 and parameters: {'random_state': None, 'n_estimators': 134}. Best is trial 15 with value: 1.0.
Model Accuracy: 0.8
Model Parameters: {'random_state': None, 'n_estimators': 129}
[I 2022-02-22 00:40:33,002] Trial 99 finished with value: 0.8 and parameters: {'random_state': None, 'n_estimators': 129}. Best is trial 15 with value: 1.0.
Number of finished trials: 100
Best trial:
  Value: 1.0
  Params: 
    random_state: None
    n_estimators: 137
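As a possible follow-up (not part of the original answer; the split below is an assumption that mirrors the one inside the objective), you can refit a final classifier with the best parameters the study found. study.best_params is the standard Optuna attribute holding them:

best_params = study.best_params  # e.g. {'random_state': None, 'n_estimators': 137}
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=best_params['random_state'])
final_clf = ExtraTreesClassifier(**best_params)  # both keys are valid ExtraTreesClassifier arguments
final_clf.fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, final_clf.predict(X_test)))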