Using OpenCV's FlannBasedMatcher, calling the matcher with the same parameters gives me different results from run to run. Can anyone suggest what I am doing wrong?
The code below shows a minimal example of the problem I am hitting - it is simplified to represent how I use FlannBasedMatcher - it is not the real code :)
The output should be the same on every pass through the loop, but it is not.
// matches_t is presumably an alias for std::vector<cv::DMatch>; matches is
// declared in the enclosing function, which is not shown here.
int const k = std::min(query_descriptors.rows,
                       std::min(train_descriptors.rows, 2));

cv::Mat query_descriptors_original = query_descriptors.clone();
cv::Mat train_descriptors_original = train_descriptors.clone();

for (int loop = 0; loop < 2; ++loop)
{
    // A fresh matcher and index are built over the same training descriptors
    // on every pass.
    cv::FlannBasedMatcher matcher;
    matcher.add(std::vector<cv::Mat>(1, train_descriptors));

    std::vector<matches_t> knnMatches;
    matcher.knnMatch(query_descriptors, knnMatches, k);

    matches.clear();
    for (auto const &knn : knnMatches)
    {
        matches.push_back(knn[0]);
        std::cout << knn[0].queryIdx << ',' << knn[0].trainIdx << '\n';
    }
    std::cout << '\n';

    // The input descriptors are not modified between passes.
    assert(cv::countNonZero(query_descriptors != query_descriptors_original) == 0);
    assert(cv::countNonZero(train_descriptors != train_descriptors_original) == 0);
}
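For what it's worth, the kd-tree index that FLANN builds is randomized, so a useful cross-check is to run the same k-NN query through a brute-force matcher, which searches exhaustively and is deterministic. Below is a minimal Python sketch of that comparison - illustrative only, with random stand-in descriptors rather than the real query_descriptors / train_descriptors:

import cv2
import numpy as np

# Stand-in data for illustration; the real code would use the actual
# query/train descriptor matrices (rows of float32).
np.random.seed(0)
query = np.random.rand(50, 64).astype(np.float32)
train = np.random.rand(200, 64).astype(np.float32)

bf = cv2.BFMatcher(cv2.NORM_L2)          # exhaustive search, no randomized index
flann = cv2.FlannBasedMatcher_create()   # default randomized kd-tree index

for loop in range(2):
    bf_knn = bf.knnMatch(query, train, k=2)     # same result on every pass
    fl_knn = flann.knnMatch(query, train, k=2)  # may differ between passes,
                                                # since the index is randomized
    print([m[0].trainIdx for m in bf_knn[:5]],
          [m[0].trainIdx for m in fl_knn[:5]])

If the brute-force results are stable while the FLANN results move around, the variation comes from the approximate index rather than from the descriptors.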
Although I don't think it will be much help(?), the output of the loop above is:
0,27
1,170
2,100
3,100
4,123
5,100
6,191
7,71 …

I have a panorama image, and a smaller image of a building that appears somewhere in that panorama. What I want to do is work out whether the building in the smaller image is present in the panorama, and how the two images are aligned.
For this first example, I am using a cropped version of the panorama image, so the pixels are identical.
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import math
# Load images
cwImage = cv2.imread('cw1.jpg',0)
panImage = cv2.imread('pan1.jpg',0)
# Prepare for SURF image analysis
surf = cv2.xfeatures2d.SURF_create(4000)
# Find keypoints and point descriptors for both images
cwKeypoints, cwDescriptors = surf.detectAndCompute(cwImage, None)
panKeypoints, panDescriptors = surf.detectAndCompute(panImage, None)
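As an optional sanity check (not part of the matching pipeline itself), the detected keypoints can be drawn on both images to confirm SURF is finding sensible features; this uses only the objects defined above plus matplotlib, which is already imported as plt:

# Optional sanity check: draw the detected SURF keypoints on each image to
# confirm the detector is finding sensible features before matching.
cwVis = cv2.drawKeypoints(cwImage, cwKeypoints, None)
panVis = cv2.drawKeypoints(panImage, panKeypoints, None)

plt.subplot(1, 2, 1); plt.imshow(cwVis); plt.title('crop: %d keypoints' % len(cwKeypoints))
plt.subplot(1, 2, 2); plt.imshow(panVis); plt.title('panorama: %d keypoints' % len(panKeypoints))
plt.show()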
Then I use OpenCV's FlannBasedMatcher to find good matches between the two images:
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
# Find matches between the descriptors
matches = flann.knnMatch(cwDescriptors, panDescriptors, …
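The knnMatch call above is cut off; assuming it was made with k=2, a common way to finish this step is Lowe's ratio test followed by a RANSAC homography, which gives both an answer to whether the building is present (enough inliers) and the alignment between the two images. A sketch continuing from the variables above - the 0.7 ratio and 5.0 px reprojection threshold are just illustrative values:

# Keep only matches that clearly beat their second-best alternative
# (Lowe's ratio test); assumes knnMatch was called with k=2.
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

# With at least 4 good correspondences, estimate the homography that maps the
# crop onto the panorama; mask flags which matches are RANSAC inliers.
if len(good) >= 4:
    src_pts = np.float32([cwKeypoints[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([panKeypoints[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    print('RANSAC inliers: %d of %d good matches' % (int(mask.sum()), len(good)))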