How to return the cost and gradient as a tuple for scipy's fmin_cg function


How can I get scipy's fmin_cg to use a single function that returns both the cost and the gradient as a tuple? The problem with having f for the cost and fprime for the gradient is that I may have to perform the (very expensive) operation from which both the cost and the gradient are computed twice. Also, sharing variables between the two functions can be cumbersome.

In Matlab, however, the equivalent minimizer works with a single function that returns the cost and the gradient together. I don't see why scipy's fmin_cg can't offer the same convenience.

Thanks in advance...


You can use scipy.optimize.minimize with jac=True, which lets a single callable return the cost and the gradient together.
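A minimal sketch of that usage (the quadratic objective and the starting point here are purely illustrative assumptions):

import numpy
from scipy.optimize import minimize

def cost_and_grad(x):
    # Do the expensive shared work once...
    diff = x - 3.0
    # ...then derive both the cost and the gradient from it.
    return numpy.sum(diff ** 2), 2.0 * diff

res = minimize(cost_and_grad, numpy.zeros(5), jac=True, method='CG')
print(res.x)  # converges to [3, 3, 3, 3, 3]

If for some reason that's not an option, you can look at how minimize handles this situation internally: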

import numpy


class MemoizeJac(object):
    """ Decorator that caches the value and gradient of a function each
    time it is called. """
    def __init__(self, fun):
        self.fun = fun   # fun returns a (value, gradient) tuple
        self.jac = None
        self.x = None

    def __call__(self, x, *args):
        # Evaluate the wrapped function, cache the gradient, return the value.
        self.x = numpy.asarray(x).copy()
        fg = self.fun(x, *args)
        self.jac = fg[1]
        return fg[0]

    def derivative(self, x, *args):
        # Return the cached gradient if x matches the last evaluation point.
        if self.jac is not None and numpy.all(x == self.x):
            return self.jac
        else:
            self(x, *args)
            return self.jac

This class wraps a function that returns both the function value and the gradient, keeping a one-element cache so it can check whether it already knows the result. Usage:

fmemo = MemoizeJac(f)   # f returns the (cost, gradient) tuple
xopt = fmin_cg(fmemo, x0, fmemo.derivative)
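For concreteness, an end-to-end sketch with fmin_cg (again using an illustrative quadratic objective):

import numpy
from scipy.optimize import fmin_cg

def f(x):
    # One expensive computation shared by the cost and the gradient.
    diff = x - 3.0
    return numpy.sum(diff ** 2), 2.0 * diff

fmemo = MemoizeJac(f)
xopt = fmin_cg(fmemo, numpy.zeros(5), fmemo.derivative)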

The odd thing about this code is that it assumes f is always called before fprime (though not that every call to f is followed by a call to fprime). I'm not sure scipy.optimize actually guarantees that, but the code is easily adapted so it doesn't make that assumption. A more robust version of the above (untested):

import numpy


class MemoizeJac(object):
    def __init__(self, fun):
        self.fun = fun   # fun returns a (value, gradient) tuple
        self.value, self.jac = None, None
        self.x = None

    def _compute(self, x, *args):
        # Evaluate the wrapped function once and cache both results.
        self.x = numpy.asarray(x).copy()
        self.value, self.jac = self.fun(x, *args)

    def __call__(self, x, *args):
        if self.value is not None and numpy.all(x == self.x):
            return self.value
        else:
            self._compute(x, *args)
            return self.value

    def derivative(self, x, *args):
        if self.jac is not None and numpy.all(x == self.x):
            return self.jac
        else:
            self._compute(x, *args)
            return self.jac
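A quick way to check the caching behavior (the call counter and the toy objective are purely for illustration):

import numpy

calls = [0]

def f(x):
    calls[0] += 1   # count how often the expensive function actually runs
    diff = x - 3.0
    return numpy.sum(diff ** 2), 2.0 * diff

fmemo = MemoizeJac(f)
x0 = numpy.array([0.0, 1.0])
fmemo(x0)             # evaluates f and caches both value and gradient
fmemo.derivative(x0)  # served from the cache; f is not called again
print(calls[0])       # prints 1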