Jav*_*ved 45 python opencv image image-processing computer-vision
I'm just working through a feature detection example in OpenCV. The example is shown below. It gives me the following error:
'module' object has no attribute 'drawMatches'
I checked the OpenCV docs, and I'm not sure why I'm getting this error. Does anyone know why?
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector
orb = cv2.ORB()
# find the keypoints and descriptors with SIFT
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
plt.imshow(img3),plt.show()
Error:
Traceback (most recent call last):
File "match.py", line 22, in <module>
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
AttributeError: 'module' object has no attribute 'drawMatches'
ray*_*ica 78
I'm late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X and the drawMatches function doesn't exist in my distribution. I also tried the second approach with find_obj, and that didn't work for me either. With that, I decided to write my own implementation that mimics drawMatches to the best of my ability, and this is what I produced.
I've provided my own images: one is of a cameraman, and the other is the same image rotated 55 degrees counterclockwise.
The basic premise of what I wrote is that I allocate an output RGB image where the number of rows is the maximum of the two images, to accommodate placing both images in the output, while the columns are simply the sum of the two images' columns. Be advised that I assume both images are grayscale.
I place each image in its corresponding spot, then run through a loop over all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw a circle at each detected location, then draw a line connecting these circles together.
Bear in mind that a keypoint detected in the second image is given in its own coordinate system. If you want to place it in the final output image, you need to offset its column coordinate by the number of columns in the first image, so that the column coordinate is expressed in the output image's coordinate system.
Without further ado:
import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structures (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]), cols1+cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4, colour blue, thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1, colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out
To show that this works, here are the two images I used:
[Images: the cameraman test image, and the same image rotated 55 degrees counterclockwise]
I used OpenCV's ORB detector to detect the keypoints, and the normalized Hamming distance as the distance measure for similarity, as ORB is a binary descriptor. As such:
import numpy as np
import cv2
img1 = cv2.imread('cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale
# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)
# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)
# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Do matching
matches = bf.match(des1,des2)
# Sort the matches based on distance. Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)
# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])
This is the image I get:
[Image: the resulting montage with the top 10 matches drawn between the two images]
To use with knnMatch from cv2.BFMatcher: I'd like to note that the above code only works if you assume the matches appear in a 1D list. However, if you decide to use the knnMatch method from cv2.BFMatcher, what is returned is a list of lists. Specifically, given the descriptors in img1 called des1 and the descriptors in img2 called des2, each element in the list returned from knnMatch is another list of the k matches from des2 that are closest to each descriptor in des1. Therefore, the first element of the knnMatch output is a list of the k matches from des2 that were closest to the first descriptor in des1. The second element is a list of the k matches from des2 that were closest to the second descriptor in des1, and so on.
To make the most sense of knnMatch, you must limit the total number of neighbours to match to k=2. The reason is that you want at least two matched points for each source point to verify the quality of the match; if the quality is good enough, you'll want to use these to draw your matches and show them on the screen. You can use a very simple ratio test (credit goes to David Lowe) to ensure that the distance from a descriptor in des1 to its first matched point in des2 is well below the distance to its second matched point in des2. Therefore, to transform what knnMatch returns into what the code I wrote above requires, iterate through the matches, apply the ratio test and check whether it passes. If it does, add the first matched keypoint to a new list.
Assuming you created all of your variables as you did before declaring the BFMatcher instance, you would now do the following to adapt the knnMatch method for use with drawMatches:
# Create matcher
# NOTE: crossCheck is left off here; cross-checking can return fewer
# than k matches per query, which would break the unpacking below
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        # Add the first matched keypoint to the list
        # if the ratio test passes
        good.append(m)

# Or do a list comprehension
#good = [m for (m,n) in matches if m.distance < 0.75*n.distance]

# Now perform drawMatches
out = drawMatches(img1, kp1, img2, kp2, good)
I'd like to attribute the above modifications to user @ryanmeasel; the answer in which these modifications were found is in his post: OpenCV Python : No drawMatchesknn function.
Mai*_*mon 18
The drawMatches function is not part of the Python interface.
As you can see in the docs, it is only defined for C++ at the moment.
Excerpt from the docs:
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch>>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )
If the function had a Python interface, you would find something like this:
Python: cv2.drawMatches(img1, keypoints1, [...])
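If you want to check whether your particular build exposes the binding, a quick attribute lookup from Python settles it. A minimal diagnostic sketch (nothing assumed beyond the standard cv2 module):
import cv2

# Print the build version, then check whether the Python binding exists
print(cv2.__version__)
print(hasattr(cv2, 'drawMatches'))  # False on builds lacking the binding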
EDIT
There actually was a commit that introduced this function 5 months ago. However, it is not (yet) in the official documentation.
Make sure you are using the newest OpenCV version (2.4.7). For the sake of completeness, the function's interface for OpenCV 3.0.0 looks like this:
cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
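On a build that does ship the binding (e.g. 3.0.0), a call mirroring the question's code might look like the sketch below; img1, kp1, img2, kp2 and matches are assumed to come from the question's script, and passing None for outImg lets OpenCV allocate the output image:
# Sketch, assuming img1/kp1/img2/kp2/matches from the question's code
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)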
Phi*_*mee 16
I know this question has an accepted answer that is correct, but if you are using OpenCV 2.4.8 and not 3.0(-dev), a workaround could be to use some functions from the included samples found in opencv\sources\samples\python2\find_obj
import cv2
from find_obj import filter_matches,explore_match
img1 = cv2.imread('../c/box.png',0) # queryImage
img2 = cv2.imread('../c/box_in_scene.png',0) # trainImage
# Initiate SIFT detector
orb = cv2.ORB()
# find the keypoints and descriptors with SIFT
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING)#, crossCheck=True)
matches = bf.knnMatch(des1, trainDescriptors = des2, k = 2)
p1, p2, kp_pairs = filter_matches(kp1, kp2, matches)
explore_match('find_obj', img1,img2,kp_pairs)#cv2 shows image
cv2.waitKey()
cv2.destroyAllWindows()
This is the output image:
[Image: explore_match visualization of the matches between box.png and box_in_scene.png]