Optuna: passing the parameter dictionary from "outside"

Mat*_*hio 9 · tags: python, dictionary, class, lightgbm, optuna

I am using Optuna to optimize some objective functions. I would like to create a custom class that "wraps" the standard Optuna code.

As an example, here is my class (it is still a work in progress!):

import lightgbm as lgb
import numpy as np
import optuna
import sklearn.metrics


class Optimizer(object):

    def __init__(self, param_dict, model, train_x, valid_x, train_y, valid_y):
        self.model = model
        self.param_dict = param_dict
        self.train_x, self.valid_x, self.train_y, self.valid_y = train_x, valid_x, train_y, valid_y

    def optimization_function(self, trial):
        self.dtrain = lgb.Dataset(self.train_x, label=self.train_y)
        gbm = lgb.train(self.param_dict, self.dtrain)

        preds = gbm.predict(self.valid_x)
        pred_labels = np.rint(preds)
        accuracy = sklearn.metrics.accuracy_score(self.valid_y, pred_labels)
        return accuracy

    def optimize(self, direction, n_trials):
        study = optuna.create_study(direction=direction)
        study.optimize(self.optimization_function, n_trials=n_trials)
        return study.best_trial

I am trying to wrap all the "logic" of the Optuna optimization inside this class, instead of writing code like the following (from the documentation) every time:

import optuna


class Objective(object):
    def __init__(self, min_x, max_x):
        # Hold this implementation specific arguments as the fields of the class.
        self.min_x = min_x
        self.max_x = max_x

    def __call__(self, trial):
        # Calculate an objective value by using the extra arguments.
        x = trial.suggest_float("x", self.min_x, self.max_x)
        return (x - 2) ** 2


# Execute an optimization by using an `Objective` instance.
study = optuna.create_study()
study.optimize(Objective(-100, 100), n_trials=100)

I want to make my code "modular" and consolidate everything into a single class. My end goal is to set up different optimization-function "templates" depending on the input model given in __init__.

So, back to the main question: I would like to pass the param dictionary in from outside. Basically, I want to be able to declare my dictionary outside the class and pass it in through __init__.

However, the ranges and distributions commonly used in Optuna code depend on the trial object, so I cannot do the following:

my_dict = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'verbosity': -1,
    'boosting_type': 'gbdt',
    # HERE I HAVE A DEPENDENCY ON trial.suggest_loguniform; I can't declare the dictionary outside the objective function
    'lambda_l1': trial.suggest_loguniform('lambda_l1', 1e-8, 10.0),
    'lambda_l2': trial.suggest_loguniform('lambda_l2', 1e-8, 10.0),
    'num_leaves': trial.suggest_int('num_leaves', 2, 256),
    'feature_fraction': trial.suggest_uniform('feature_fraction', 0.4, 1.0),
    'bagging_fraction': trial.suggest_uniform('bagging_fraction', 0.4, 1.0),
    'bagging_freq': trial.suggest_int('bagging_freq', 1, 7),
    'min_child_samples': trial.suggest_int('min_child_samples', 5, 100),
} 
my_optimizer = Optimizer(my_dict, ..., ..., ..., ......)
best_result = my_optimizer.optimize('maximize', 100)

Is there any workaround or solution for passing this dictionary?

小智 12

Since you are trying to make the objective accept multiple arguments, another, quicker way is to use Python's built-in functools.partial:

from functools import partial

import optuna


def objective(trial, param1, param2):
    # Use param1 and param2 alongside the trial object, then return
    # the value to optimize (a toy expression is used here).
    x = trial.suggest_float("x", -10, 10)
    return (x - param1) ** 2 + param2


param1 = 2
param2 = 3
objective = partial(objective, param1=param1, param2=param2)

study = optuna.create_study()
study.optimize(objective, n_trials=100)
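Applied to the LightGBM setup from the question, a minimal sketch of the same idea might look like this. The toy dataset, the base_params dict, and the split between fixed keys (bound with partial) and trial-dependent keys (suggested inside the objective) are illustrative assumptions of mine, not part of the original answer; suggest_float(..., log=True) is simply the newer spelling of suggest_loguniform.

from functools import partial

import lightgbm as lgb
import numpy as np
import optuna
import sklearn.datasets
import sklearn.metrics
import sklearn.model_selection


def lgbm_objective(trial, base_params, train_x, train_y, valid_x, valid_y):
    # Static settings arrive from outside via `partial`; only the
    # trial-dependent values are added here, where `trial` exists.
    params = dict(base_params)
    params["lambda_l1"] = trial.suggest_float("lambda_l1", 1e-8, 10.0, log=True)
    params["num_leaves"] = trial.suggest_int("num_leaves", 2, 256)

    dtrain = lgb.Dataset(train_x, label=train_y)
    gbm = lgb.train(params, dtrain)
    preds = gbm.predict(valid_x)
    return sklearn.metrics.accuracy_score(valid_y, np.rint(preds))


# Toy data only to make the sketch runnable end to end.
x, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(x, y)

base_params = {"objective": "binary", "metric": "binary_logloss", "verbosity": -1}

study = optuna.create_study(direction="maximize")
study.optimize(
    partial(lgbm_objective, base_params=base_params,
            train_x=train_x, train_y=train_y,
            valid_x=valid_x, valid_y=valid_y),
    n_trials=20,
)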


小智 6

I am not sure I fully understand your question, but do you mean that you want to pass a dictionary to the objective function?

If so, this works for me, using a lambda, from the Optuna FAQ:

import optuna

# Objective function that takes three arguments.
def objective(trial, min_x, max_x):
    x = trial.suggest_float("x", min_x, max_x)
    return (x - 2) ** 2


# Extra arguments.
min_x = -100
max_x = 100

# Execute an optimization by using the above objective function wrapped by `lambda`.
study = optuna.create_study()
study.optimize(lambda trial: objective(trial, min_x, max_x), n_trials=100)
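The same trick also covers the dictionary itself: instead of a plain dict, declare a callable that maps a trial to the dict, and hand that callable to the wrapper class. A rough sketch of how the question's Optimizer might accept it (param_builder and my_params are hypothetical names of mine, not Optuna API):

import lightgbm as lgb
import numpy as np
import optuna
import sklearn.metrics


class Optimizer(object):

    def __init__(self, param_builder, train_x, valid_x, train_y, valid_y):
        # `param_builder` is any callable: trial -> parameter dict.
        self.param_builder = param_builder
        self.train_x, self.valid_x = train_x, valid_x
        self.train_y, self.valid_y = train_y, valid_y

    def optimization_function(self, trial):
        params = self.param_builder(trial)  # the dict is built only once a trial exists
        dtrain = lgb.Dataset(self.train_x, label=self.train_y)
        gbm = lgb.train(params, dtrain)
        preds = gbm.predict(self.valid_x)
        return sklearn.metrics.accuracy_score(self.valid_y, np.rint(preds))

    def optimize(self, direction, n_trials):
        study = optuna.create_study(direction=direction)
        study.optimize(self.optimization_function, n_trials=n_trials)
        return study.best_trial


# The "dictionary" is declared outside the class, as a function of `trial`.
def my_params(trial):
    return {
        "objective": "binary",
        "metric": "binary_logloss",
        "verbosity": -1,
        "lambda_l1": trial.suggest_float("lambda_l1", 1e-8, 10.0, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 2, 256),
    }


# my_optimizer = Optimizer(my_params, train_x, valid_x, train_y, valid_y)
# best_trial = my_optimizer.optimize("maximize", 100)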


fer*_*rdy 0

How about creating a detailed dictionary to pass to the class, and then rebuilding it inside the objective using the corresponding trial suggestion types?

Code

import optuna


class Optimizer(object):
    def __init__(self, param_dict):
        self.param_dic = param_dict

        self.objective = self.param_dic.get('objective', None)
        self.metric = self.param_dic.get('metric', None)

    def optimization_function(self, trial):
        suggested_param = {}  # param storage

        # int
        int_param = self.param_dic['param'].get('int', None)
        if int_param is not None:     
            for k, v in int_param.items():
                suggested = trial.suggest_int(k, v['low'], v['high'])
                suggested_param.update({k: suggested})

        # log
        loguniform_param = self.param_dic['param'].get('loguniform', None)
        if loguniform_param is not None:
            for k, v in loguniform_param.items():
                suggested = trial.suggest_loguniform(k, v['low'], v['high'])
                suggested_param.update({k: suggested})

        a = suggested_param.get('a', None)
        b = suggested_param.get('b', None)
        c = suggested_param.get('c', None)

        return a + b + 1.5*c


    def optimize(self, direction, n_trials):
        study = optuna.create_study(direction = direction)
        study.optimize(self.optimization_function, n_trials = n_trials)
        return study.best_trial


my_dict = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'param': {
        'int': {
            'a': {
                'low': 0,
                'high': 20
            },
            'b': {
                'low': 0,
                'high': 10
            }
        },
        'loguniform': {
            'c': {
                'low': 1e-8,
                'high': 10.0
            }
        }
    }
}

my_optimizer = Optimizer(my_dict)
best_result = my_optimizer.optimize('maximize', 100)
print(f'best param: {best_result.params}')
print(f'best value: {best_result.values}')
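To connect this back to the question's LightGBM use case, the toy return value a + b + 1.5*c could be replaced by an actual training step that merges the fixed keys with the rebuilt suggestions. A hedged, free-standing sketch of such an objective (the data arguments are assumed to be supplied by the caller, for instance via functools.partial as in the first answer):

import lightgbm as lgb
import numpy as np
import sklearn.metrics


def objective_from_spec(trial, param_dic, train_x, train_y, valid_x, valid_y):
    # Rebuild the spec dict into concrete trial suggestions.
    suggested = {}
    for k, v in param_dic['param'].get('int', {}).items():
        suggested[k] = trial.suggest_int(k, v['low'], v['high'])
    for k, v in param_dic['param'].get('loguniform', {}).items():
        suggested[k] = trial.suggest_loguniform(k, v['low'], v['high'])

    # Merge the fixed keys (objective, metric, ...) with the suggested ones.
    params = {'objective': param_dic['objective'],
              'metric': param_dic['metric'],
              **suggested}

    dtrain = lgb.Dataset(train_x, label=train_y)
    gbm = lgb.train(params, dtrain)
    preds = gbm.predict(valid_x)
    return sklearn.metrics.accuracy_score(valid_y, np.rint(preds))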