How do I use the relationships between FLANN matches to determine a sensible homography?

Asked by And*_*rew (3 votes) · tags: opencv, image, homography, flann, flannbasedmatcher

I have a panoramic image, and a smaller image of a building that appears within that panorama. What I want to do is determine whether the building in the smaller image is present in the panorama, and how the two images are aligned.

For this first example, I am using a cropped version of the panorama, so the pixels are identical.

import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import math

# Load images
cwImage = cv2.imread('cw1.jpg',0)
panImage = cv2.imread('pan1.jpg',0)

# Prepare for SURF image analysis
surf = cv2.xfeatures2d.SURF_create(4000)

# Find keypoints and point descriptors for both images
cwKeypoints, cwDescriptors = surf.detectAndCompute(cwImage, None)
panKeypoints, panDescriptors = surf.detectAndCompute(panImage, None)

[image: SURF keypoints drawn on the cropped building image]

[image: SURF keypoints drawn on the panorama]

I then use OpenCV's FlannBasedMatcher to find good matches between the two images:

FLANN_INDEX_KDTREE = 1  # FLANN's kd-tree index is 1 (0 selects the linear index)
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Find matches between the descriptors
matches = flann.knnMatch(cwDescriptors, panDescriptors, k=2)

good = []

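# Lowe's ratio test: keep a match only when it is clearly closer than the runner-up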
for m, n in matches:
  if m.distance < 0.7 * n.distance:
    good.append(m)

[image: the good matches drawn between the cropped image and the panorama]

As you can see, in this example it matches the points between the images perfectly. So I find the homography and apply a perspective warp:

cwPoints = np.float32([cwKeypoints[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
panPoints = np.float32([panKeypoints[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
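# With no method flag, findHomography fits a least-squares solution over every supplied point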
h, status = cv2.findHomography(cwPoints, panPoints)

warpImage = cv2.warpPerspective(cwImage, h, (panImage.shape[1], panImage.shape[0]))

[image: the cropped image warped cleanly into place over the panorama]

The result is that it places the smaller image perfectly within the larger image.
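
A quick way to eyeball the alignment, assuming the warpImage and panImage arrays from above (the overlay name and output file are just for illustration), is to blend the two:

# Blend the warped image over the panorama; misalignment shows up as ghosting
overlay = cv2.addWeighted(panImage, 0.5, warpImage, 0.5, 0)
cv2.imwrite('overlay.jpg', overlay)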

Now, I want to do the same thing where the smaller image is not a pixel-perfect crop of the larger image.

For the new smaller image, the keypoints look like this:

[image: keypoint matches between the new smaller image and the panorama]

You can see that in some cases it matches correctly, and in others it does not.

If I call findHomography with these matches, it takes all of the data points into account and comes up with a nonsensical perspective warp, because it is based on the correct matches and the incorrect matches alike.

[image: the nonsensical warp produced from the mixed good and bad matches]

What I am looking for is a missing step between detecting the good matches and calling findHomography, where I can look at the relationships between the matches and determine which of them are actually correct.

I am wondering whether there is a function in OpenCV that I should be looking at for this step, or whether this is something I will need to work out myself, and if so, how I should go about it.
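
For reference while reading the answer below: the usual tool for this missing step is robust estimation. cv2.findHomography accepts cv2.RANSAC as its method argument and returns a mask flagging which matches it treated as inliers. A minimal sketch, reusing the cwPoints, panPoints, and good variables from above:

# RANSAC fits homographies to random 4-point subsets and keeps the model
# that the most matches agree with; 5.0 is the reprojection-error
# threshold in pixels for counting a match as an inlier.
h, mask = cv2.findHomography(cwPoints, panPoints, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]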

Answered by Kin*_*t (7 votes)

Last year (2017.11.11) I wrote a blog post about finding an object in a scene. Maybe it helps. The link is here: https://zhuanlan.zhihu.com/p/30936804

Environment: OpenCV 3.3 + Python 3.5


Matches found:

[image: good matches drawn between box.png and box_in_scene.png]

Object found in the scene:

[image: the found object outlined in green in the scene]


Code:

#!/usr/bin/python3
# 2017.11.11 01:44:37 CST
# 2017.11.12 00:09:14 CST
"""
??Sift??????????????????
"""

import cv2
import numpy as np
MIN_MATCH_COUNT = 4

imgname1 = "box.png"
imgname2 = "box_in_scene.png"

## (1) prepare data
img1 = cv2.imread(imgname1)
img2 = cv2.imread(imgname2)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)


## (2) Create SIFT object
sift = cv2.xfeatures2d.SIFT_create()

## (3) Create flann matcher
matcher = cv2.FlannBasedMatcher(dict(algorithm = 1, trees = 5), {})

## (4) Detect keypoints and compute keypoint descriptors
kpts1, descs1 = sift.detectAndCompute(gray1,None)
kpts2, descs2 = sift.detectAndCompute(gray2,None)

## (5) knnMatch to get Top2
matches = matcher.knnMatch(descs1, descs2, 2)
# Sort by their distance.
matches = sorted(matches, key = lambda x:x[0].distance)

## (6) Ratio test, to get good matches.
good = [m1 for (m1, m2) in matches if m1.distance < 0.7 * m2.distance]

canvas = img2.copy()

## (7) find homography matrix
## A homography needs at least four point pairs, hence MIN_MATCH_COUNT = 4
if len(good)>MIN_MATCH_COUNT:
    ## Collect the coordinates of the matched keypoints
    ## (queryIndex for the small object, trainIndex for the scene )
    src_pts = np.float32([ kpts1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
    dst_pts = np.float32([ kpts2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
    ## Estimate the homography with RANSAC; mask flags which good matches are inliers
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
    ## mask marks which of the good matches RANSAC kept as inliers
    #matchesMask2 = mask.ravel().tolist()
    ## Project the corners of image 1 into image 2 with the homography
    h,w = img1.shape[:2]
    pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
    dst = cv2.perspectiveTransform(pts,M)
    ## Draw the outline of the found object
    cv2.polylines(canvas,[np.int32(dst)],True,(0,255,0),3, cv2.LINE_AA)
else:
    print( "Not enough matches are found - {}/{}".format(len(good),MIN_MATCH_COUNT))
    raise SystemExit  # M is needed below, so stop here without enough matches


## (8) drawMatches
matched = cv2.drawMatches(img1, kpts1, canvas, kpts2, good, None)

## (9) Crop the matched region from scene
h,w = img1.shape[:2]
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)
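# getPerspectiveTransform maps the four projected corners back to an
# upright w x h rectangle, i.e. the inverse warp for just that region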
perspectiveM = cv2.getPerspectiveTransform(np.float32(dst),pts)
found = cv2.warpPerspective(img2,perspectiveM,(w,h))

## (10) save and display
cv2.imwrite("matched.png", matched)
cv2.imwrite("found.png", found)
cv2.imshow("matched", matched)
cv2.imshow("found", found)
cv2.waitKey()
cv2.destroyAllWindows()
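
A follow-up note on the commented-out matchesMask2 line: the mask returned by findHomography can be handed to drawMatches so that only the RANSAC inliers are drawn. A short sketch, assuming the variables from the code above (matched_inliers.png is just an illustrative output name):

# Draw only the matches RANSAC accepted as inliers
matchesMask = mask.ravel().tolist()
draw_params = dict(matchColor=(0, 255, 0),   # inliers in green
                   singlePointColor=None,
                   matchesMask=matchesMask,  # suppress the outlier matches
                   flags=2)                  # NOT_DRAW_SINGLE_POINTS
inlier_view = cv2.drawMatches(img1, kpts1, canvas, kpts2, good, None, **draw_params)
cv2.imwrite("matched_inliers.png", inlier_view)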