Python and HyperOpt: how to do multiprocessed grid search?

use*_*204 · 5 · tags: python, machine-learning, pandas, hyperparameters, grid-search

I'm trying to tune some parameters and the search space is very large. So far I have 5 dimensions, and it will probably grow to around 10. The issue is that I think I could get a significant speedup if I could figure out how to multiprocess it, but I can't find any good way to do so. I'm using hyperopt, and I can't figure out how to make it use more than 1 core. Here is my code with the irrelevant parts stripped out:

from numpy    import random
from pandas   import DataFrame
from hyperopt import fmin, tpe, hp, Trials





def calc_result(x):

    huge_df = DataFrame(random.randn(100000, 5), columns=['A', 'B', 'C', 'D', 'E'])

    total = 0

    # Assume that I MUST iterate
    for idx, row in huge_df.iterrows():


        # Assume there is no way to optimize here
        curr_sum = row['A'] * x['adjustment_1'] + \
                   row['B'] * x['adjustment_2'] + \
                   row['C'] * x['adjustment_3'] + \
                   row['D'] * x['adjustment_4'] + \
                   row['E'] * x['adjustment_5']


        total += curr_sum

    # In real life I want the total as high as possible, but for the minimizer it has to be a negative value
    total_as_neg = total * -1

    print(total_as_neg)

    return total_as_neg


space = {'adjustment_1': hp.quniform('adjustment_1', 0, 1, 0.001),
         'adjustment_2': hp.quniform('adjustment_2', 0, 1, 0.001),
         'adjustment_3': hp.quniform('adjustment_3', 0, 1, 0.001),
         'adjustment_4': hp.quniform('adjustment_4', 0, 1, 0.001),
         'adjustment_5': hp.quniform('adjustment_5', 0, 1, 0.001)}

trials = Trials()

best = fmin(fn        = calc_result,
            space     = space,
            algo      = tpe.suggest,
            max_evals = 20000,
            trials    = trials)

So far I have 4 cores, but I can get basically as many as I need. How can I get hyperopt to use more than 1 core, or is there another library that can do multiprocessing?
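As a side sketch of one option (my own illustration, not part of hyperopt's API; the function names `chunk_total` and `calc_result_parallel` are hypothetical): even if each trial must stay serial in hyperopt itself, the row loop inside a single objective evaluation can be split across cores with the standard library's `multiprocessing.Pool`, with each worker summing one chunk of the DataFrame:

```python
from multiprocessing import Pool

import numpy as np
from pandas import DataFrame


def chunk_total(args):
    """Sum the weighted rows of one DataFrame chunk in a worker process."""
    chunk, x = args
    total = 0
    # Still iterating row by row, per the "I MUST iterate" constraint
    for _, row in chunk.iterrows():
        total += (row['A'] * x['adjustment_1'] +
                  row['B'] * x['adjustment_2'] +
                  row['C'] * x['adjustment_3'] +
                  row['D'] * x['adjustment_4'] +
                  row['E'] * x['adjustment_5'])
    return total


def calc_result_parallel(x, n_procs=4, n_rows=100000):
    huge_df = DataFrame(np.random.randn(n_rows, 5),
                        columns=['A', 'B', 'C', 'D', 'E'])
    chunks = np.array_split(huge_df, n_procs)  # one chunk per process
    with Pool(n_procs) as pool:
        total = sum(pool.map(chunk_total, [(c, x) for c in chunks]))
    return total * -1  # negate for the minimizer
```

Note this parallelizes within one trial rather than running trials concurrently, so it helps only when the per-trial cost dominates, and it still leaves `fmin` itself sequential.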

ric*_*iaw · 5

If you have a Mac or Linux (or the Windows Linux Subsystem), you can add about 10 lines of code to do this in parallel with ray. If you install ray via the latest wheels here, then you can run your script with minor modifications, shown below, to do a parallel/distributed grid search with HyperOpt. At a high level, it runs fmin with tpe.suggest while creating a Trials object internally in a parallel fashion.

from numpy    import random
from pandas   import DataFrame
from hyperopt import fmin, tpe, hp, Trials


def calc_result(x, reporter):  # add a reporter param here

    huge_df = DataFrame(random.randn(100000, 5), columns=['A', 'B', 'C', 'D', 'E'])

    total = 0

    # Assume that I MUST iterate
    for idx, row in huge_df.iterrows():


        # Assume there is no way to optimize here
        curr_sum = row['A'] * x['adjustment_1'] + \
                   row['B'] * x['adjustment_2'] + \
                   row['C'] * x['adjustment_3'] + \
                   row['D'] * x['adjustment_4'] + \
                   row['E'] * x['adjustment_5']


        total += curr_sum

    # In real life I want the total as high as possible; the minimizer needs
    # a negative value, but Ray handles that negation itself, so there is no
    # need for these lines anymore:
    # total_as_neg = total * -1
    # print(total_as_neg)

    # Ray will negate this by itself to feed into HyperOpt
    reporter(timesteps_total=1, episode_reward_mean=total)


space = {'adjustment_1': hp.quniform('adjustment_1', 0, 1, 0.001),
         'adjustment_2': hp.quniform('adjustment_2', 0, 1, 0.001),
         'adjustment_3': hp.quniform('adjustment_3', 0, 1, 0.001),
         'adjustment_4': hp.quniform('adjustment_4', 0, 1, 0.001),
         'adjustment_5': hp.quniform('adjustment_5', 0, 1, 0.001)}

import ray
import ray.tune as tune
from ray.tune.hpo_scheduler import HyperOptScheduler

ray.init()
tune.register_trainable("calc_result", calc_result)
tune.run_experiments({"experiment": {
    "run": "calc_result",
    "repeat": 20000,
    "config": {"space": space}}}, scheduler=HyperOptScheduler())


mrk*_*mrk · 0

Just a few side notes on your question. I was also doing hyperparameter search recently, so if you have your own reasons for this setup, feel free to ignore me.

The point is that you should prefer random search over grid search.

Here is the paper where they make this point.

Here is some intuition: basically, random search gives better coverage along each individual feature dimension, while grid search spreads its budget over the whole feature space on a coarse lattice, which is why random search tends to be the way to go.
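That coverage claim can be made concrete with a quick sketch (my own numbers, not from the post): with a budget of 16 evaluations in 2 dimensions, a 4x4 grid only ever tries 4 distinct values per dimension, while 16 random points almost surely try 16 distinct values per dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

budget, dims = 16, 2
side = int(budget ** (1 / dims))  # 4 points per axis for a 4x4 grid

# Grid search: Cartesian product of `side` evenly spaced values per axis.
axis = np.linspace(0, 1, side)
grid = np.array([(a, b) for a in axis for b in axis])

# Random search: the same budget of uniformly drawn points.
rand = rng.uniform(0, 1, size=(budget, dims))

grid_distinct = len(np.unique(grid[:, 0]))  # distinct values along dim 0
rand_distinct = len(np.unique(rand[:, 0]))

print(grid_distinct, rand_distinct)  # 4 vs 16
```

So if only one of the dimensions really matters, random search has probed it at 16 distinct values while grid search has probed it at only 4.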

http://cs231n.github.io/neural-networks-3/ (this is where the image comes from)