Tags: python, optimization, numpy, machine-learning, scipy
I have a computer vision algorithm I want to tune using scipy.optimize.minimize. Right now I only want to tune two parameters, but the number of parameters might eventually grow, so I would like to use a technique that can do high-dimensional gradient searches. The Nelder-Mead implementation in SciPy seemed like a good fit.

I got the code all set up, but it seems the minimize function really wants to use floating point values with a step size smaller than one. The current set of parameters are both integers: one has a step size of 1 and the other has a step size of 2 (i.e. the value must be odd; if it isn't, the thing I am trying to optimize will convert it to odd). Roughly, one parameter is a window size in pixels and the other is a threshold (a value from 0 to 255).

For what it is worth, I am using a fresh build of scipy from the git repo. Does anyone know how to tell scipy to use a specific step size for each parameter? Is there some way I can roll my own gradient function? Is there a scipy flag that could help me? I am aware that this could be done with a simple parameter sweep, but I would eventually like to apply this code to much larger sets of parameters.
The code itself is pretty simple:
import numpy as np
from scipy.optimize import minimize
from ScannerUtil import straightenImg
import bson

def doSingleIteration(parameters):
    # do some machine vision magic
    # return the difference between my value and the truth value
    pass

parameters = np.array([11, 10])

res = minimize(doSingleIteration, parameters, method='Nelder-Mead',
               options={'xtol': 1e-2, 'disp': True, 'ftol': 1.0})  # not sure if these options do anything

print "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print res
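For reference, the rounding/odd-forcing described above might be sketched like this. The helper name and the exact rounding rule are illustrative, not from the original code; truncation toward zero is a guess based on the log below (11.55 maps to block_size 11, 10.5 to degree 10):

```python
import numpy as np

def snapParameters(parameters):
    # hypothetical helper: map the float vector Nelder-Mead proposes
    # onto the valid integer settings the vision code accepts
    block_size, threshold = (int(v) for v in parameters)  # truncate toward zero
    if block_size % 2 == 0:                  # the window size must be odd
        block_size += 1
    threshold = max(0, min(255, threshold))  # the threshold lives in 0..255
    return {'block_size': block_size, 'threshold': threshold}
```

Something along these lines would be called at the top of doSingleIteration, so the optimizer can keep working in floats while the algorithm only ever sees valid integers.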
This is what my output looks like. As you can see, we are repeating a lot of runs and not getting anywhere in the minimization.
+++++++++++++++++++++++++++++++++++++++++
[ 11. 10.] <-- Output from scipy minimize
{'block_size': 11, 'degree': 10} <-- input to my algorithm rounded and made int
+++++++++++++++++++++++++++++++++++++++++
120 <-- output of the function I am trying to minimize
+++++++++++++++++++++++++++++++++++++++++
[ 11.55 10. ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11. 10.5]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.55 9.5 ]
{'block_size': 11, 'degree': 9}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.1375 10.25 ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.275 10. ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11. 10.25]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.275 9.75 ]
{'block_size': 11, 'degree': 9}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
~~~
SNIP
~~~
+++++++++++++++++++++++++++++++++++++++++
[ 11. 10.0078125]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
Optimization terminated successfully.
Current function value: 120.000000
Iterations: 7
Function evaluations: 27
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
status: 0
nfev: 27
success: True
fun: 120.0
x: array([ 11., 10.])
message: 'Optimization terminated successfully.'
nit: 7
Assuming that the function to minimize is arbitrarily complex (nonlinear), this is in general a very hard problem. You cannot guarantee the optimal solution unless you try every possible option. I do not know whether any integer-constrained nonlinear optimizer exists (I somewhat doubt it), and I will assume you know that Nelder-Mead should work fine if it were a continuous function.
Edit: Considering the comment from @Dougal, I will just add here: set up a coarse + fine grid search first; afterwards, if you feel like trying whether your Nelder-Mead works (and converges faster), the points below may help...
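A coarse + fine sweep over these two parameters is easy to write down. Here is a minimal sketch with a dummy objective standing in for the vision pipeline; the real code would call doSingleIteration instead, and the strides and box sizes are arbitrary choices:

```python
def objective(block_size, threshold):
    # dummy stand-in for the vision pipeline, minimum at (15, 128)
    return abs(block_size - 15) + abs(threshold - 128)

# coarse pass: a wide, cheap sweep over the valid integer values
coarse = [(b, t) for b in range(3, 52, 8)        # odd window sizes, big stride
                 for t in range(0, 256, 32)]
b0, t0 = min(coarse, key=lambda p: objective(*p))

# fine pass: a dense sweep in a small box around the coarse winner
fine = [(b, t) for b in range(max(3, b0 - 8), b0 + 9, 2)  # stride 2 keeps b odd
               for t in range(max(0, t0 - 32), min(255, t0 + 32) + 1)]
best = min(fine, key=lambda p: objective(*p))
```

The coarse pass narrows the search to a neighborhood, and the fine pass only pays the dense cost inside that neighborhood, which keeps the total evaluation count manageable even when each evaluation is expensive.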
But maybe some points help: the initial steps Nelder-Mead takes are (1 + 0.05) * x0[j] (for all dimensions j, unless x0[j] is 0), which is exactly what you see in the first evaluation steps of your output. Maybe these step sizes can be supplied in newer versions; otherwise you could just change/copy the scipy code (it is pure Python) and set them to something more reasonable. Alternatively, if you feel that is simpler, scale all your input variables down so that (1 + 0.05) * x0 has a reasonable size.