I am using svmtrain in MATLAB with the MLP kernel, as follows:
mlp=svmtrain(train_data,train_label,'Kernel_Function','mlp','showplot',true);
But I get this error:
??? Error using ==> svmtrain at 470
Unable to solve the optimization problem:
Exiting: the solution is unbounded and at infinity;
the constraints are not restrictive enough.
What is the reason? I have tried other kernels and got no error. I even tried the answer from "svmtrain - Unable to solve the optimization problem", as follows:
options = optimset('maxiter',1000);
svmtrain(train_data,train_label,'Kernel_Function','mlp','Method','QP',...
'quadprog_opts',options);
But I got the same error again. My training set is a simple 45×2 dataset containing data points from two classes.
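
For reference, a minimal self-contained sketch of the setup; the synthetic points below are only placeholders standing in for my real train_data/train_label, and only the shape (45×2, two classes) matches:

% Synthetic stand-in for the real training set: 45 two-dimensional points,
% split into two classes (the random values are placeholders).
rng(1);
train_data  = [randn(23,2); randn(22,2) + 2];
train_label = [ones(23,1); -ones(22,1)];

% Same call that triggers the error with the MLP kernel:
mlp = svmtrain(train_data, train_label, 'Kernel_Function', 'mlp', 'showplot', true);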
The solution there does not really explain anything. The problem is that the quadratic programming (QP) method fails to converge on the optimization problem. The usual fix is to increase the number of iterations, but I have tested this on data of the same size with 1,000,000 iterations and it still does not converge:
options = optimset('maxIter',1000000);
mlp = svmtrain(data,labels,'Kernel_Function','mlp','Method','QP',...
'quadprog_opts',options);
??? Error using ==> svmtrain at 576
Unable to solve the optimization problem:
Exiting: the solution is unbounded and at infinity;
the constraints are not restrictive enough.
My question is: is there a reason you are using quadratic programming rather than SMO for the optimization? Doing exactly the same thing with SMO works fine:
mlp = svmtrain(data,labels,'Kernel_Function','mlp','Method','SMO');
mlp =
SupportVectors: [40x2 double]
Alpha: [40x1 double]
Bias: 0.0404
KernelFunction: @mlp_kernel
KernelFunctionArgs: {}
GroupNames: [45x1 double]
SupportVectorIndices: [40x1 double]
ScaleData: [1x1 struct]
FigureHandles: []
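
Once training succeeds with SMO, the returned structure can be passed to svmclassify for prediction. A minimal sketch (test_data is a placeholder name for new two-column samples):

% Train with SMO, which avoids the unbounded QP, then classify new points.
% test_data is assumed to be an N-by-2 matrix of new samples.
mlp = svmtrain(data, labels, 'Kernel_Function', 'mlp', 'Method', 'SMO');
predicted = svmclassify(mlp, test_data);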