Tags: c#, opencv, image, surf, emgucv
I have been trying to use the EMGU SURFFeature example to determine whether an image is present in a collection of images, but I am having trouble understanding how to determine whether a match has been found.
[Images: original image; Scene_1 (match); Scene_2 (no match)]

I have been going through the documentation and have spent hours searching for a possible way to determine whether two images are the same. As you can see in the images above, match lines are drawn in both cases.

Obviously, the scene I am actually trying to find gets more matches (connecting lines), but how do I check for this in code?

Question: how do I filter out the good matches?

My goal is to compare an input image (captured from a webcam) against a collection of images in a database. But before I save all the images to the database, I need to know which values I can compare the input against (for example, saving the objectKeypoints in the database).
Here is my sample code (the matching part):
private void match_test()
{
    long matchTime;
    using (Mat modelImage = CvInvoke.Imread(@"images\input.jpg", LoadImageType.Grayscale))
    using (Mat observedImage = CvInvoke.Imread(@"images\2.jpg", LoadImageType.Grayscale))
    {
        Mat result = DrawMatches.Draw(modelImage, observedImage, out matchTime);
        //ImageViewer.Show(result, String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime));
        ib_output.Image = result;
        label7.Text = String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime);
    }
}

public static void FindMatch(Mat modelImage, Mat observedImage, out long matchTime, out VectorOfKeyPoint modelKeyPoints, out VectorOfKeyPoint observedKeyPoints, VectorOfVectorOfDMatch matches, out Mat mask, out Mat homography)
{
    int k = 2;
    double uniquenessThreshold = 0.9;
    double hessianThresh = 800;
    Stopwatch watch;
    homography = null;

    modelKeyPoints = new VectorOfKeyPoint();
    observedKeyPoints = new VectorOfKeyPoint();

    using (UMat uModelImage = modelImage.ToUMat(AccessType.Read))
    using (UMat uObservedImage = observedImage.ToUMat(AccessType.Read))
    {
        SURF surfCPU = new SURF(hessianThresh);

        // extract features from the model (object) image
        UMat modelDescriptors = new UMat();
        surfCPU.DetectAndCompute(uModelImage, null, modelKeyPoints, modelDescriptors, false);

        watch = Stopwatch.StartNew();

        // extract features from the observed image
        UMat observedDescriptors = new UMat();
        surfCPU.DetectAndCompute(uObservedImage, null, observedKeyPoints, observedDescriptors, false);

        // match the two sets of SURF descriptors
        BFMatcher matcher = new BFMatcher(DistanceType.L2);
        matcher.Add(modelDescriptors);
        matcher.KnnMatch(observedDescriptors, matches, k, null);

        mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
        mask.SetTo(new MCvScalar(255));
        Features2DToolbox.VoteForUniqueness(matches, uniquenessThreshold, mask);

        int nonZeroCount = CvInvoke.CountNonZero(mask);
        if (nonZeroCount >= 4)
        {
            nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints,
                matches, mask, 1.5, 20);
            if (nonZeroCount >= 4)
                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints,
                    observedKeyPoints, matches, mask, 2);
        }
        watch.Stop();
    }
    matchTime = watch.ElapsedMilliseconds;
}
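For context, the VoteForUniqueness step in the code above is essentially Lowe's ratio test on the k-NN distances (k = 2): a match is kept only when the best distance is clearly smaller than the second-best one. A minimal stand-alone sketch of that test, using no EmguCV types and made-up distances purely for illustration:

```csharp
using System;

class RatioTestSketch
{
    // Keep a match only if bestDist <= threshold * secondBestDist
    // (this is the idea behind VoteForUniqueness with uniquenessThreshold = 0.9:
    // matches whose two nearest neighbours are almost equally close are rejected).
    static bool[] VoteForUniqueness(double[][] knnDistances, double threshold)
    {
        var mask = new bool[knnDistances.Length];
        for (int i = 0; i < knnDistances.Length; i++)
            mask[i] = knnDistances[i][0] <= threshold * knnDistances[i][1];
        return mask;
    }

    static void Main()
    {
        // Hypothetical best / second-best descriptor distances for three matches.
        double[][] dists =
        {
            new[] { 0.2, 0.9 },  // distinctive: kept
            new[] { 0.5, 0.52 }, // ambiguous: rejected
            new[] { 0.3, 0.4 },  // kept, since 0.3 <= 0.9 * 0.4
        };
        Console.WriteLine(string.Join(",", VoteForUniqueness(dists, 0.9)));
        // prints "True,False,True"
    }
}
```

A lower threshold (e.g. 0.7 or 0.8) rejects more ambiguous matches; 0.9 as used in the question is comparatively permissive.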
I really feel like I am not far from the solution. I hope someone can help me.
On exit from Features2DToolbox.GetHomographyMatrixFromMatchedFeatures, the mask matrix has been updated with zeros wherever a match is an outlier (i.e., does not correspond well under the computed homography). Therefore, calling CountNonZero again on mask should give an indication of the match quality.
I see that you want to classify a match as "good" or "bad", rather than just compare a number of candidate matches against a single image; from the examples in your question, it seems a reasonable threshold might be 1/4 of the number of keypoints found in the input image. You probably also want an absolute minimum, on the grounds that without a certain amount of evidence you cannot really consider any match good. So, for example, something like:
bool FindMatch(...)
{
    bool goodMatch = false;
    // ...
    homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(...);
    int nInliers = CvInvoke.CountNonZero(mask);
    goodMatch = nInliers >= 10 && nInliers >= observedKeyPoints.Size / 4;
    // ...
    return goodMatch;
}
On the branch where the homography does not get computed, goodMatch of course just stays false as initialized. The numbers 10 and 1/4 are somewhat arbitrary and depend on your application.
(Caveat: the above comes entirely from reading the documentation; I have not actually tried it.)
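The rule above extends naturally to the webcam-versus-database goal stated in the question: run the matcher against each stored image and keep the candidate with the most inliers among those that pass the threshold. A plain-logic sketch of that selection step (the function name and the pick-the-best strategy are my own illustration, not from the answer):

```csharp
using System;

class BestMatchSketch
{
    // Apply the "good match" rule per database image and return the index of
    // the passing image with the most inliers, or -1 if none passes.
    // absMin and frac correspond to the 10 and 1/4 suggested above.
    static int SelectBest(int[] inliers, int observedKeyPointCount, int absMin, double frac)
    {
        int required = Math.Max(absMin, (int)(observedKeyPointCount * frac));
        int best = -1, bestInliers = 0;
        for (int i = 0; i < inliers.Length; i++)
        {
            if (inliers[i] >= required && inliers[i] > bestInliers)
            {
                best = i;
                bestInliers = inliers[i];
            }
        }
        return best;
    }

    static void Main()
    {
        // Hypothetical inlier counts for three database images, 80 observed
        // keypoints: the threshold is max(10, 80 / 4) = 20, so only
        // image 1 (34 inliers) passes.
        int[] inliers = { 5, 34, 12 };
        Console.WriteLine(SelectBest(inliers, 80, 10, 0.25)); // prints "1"
    }
}
```

Storing the precomputed descriptors per database image (as the question suggests with objectKeypoints) means only the webcam frame's features need to be extracted per comparison.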