OpenCV iOS - displaying the image returned by drawMatches

Fab*_*oni 2 opencv computer-vision surf feature-detection flann

I'm new to OpenCV. I'm trying to draw feature matches between images using FLANN/SURF in OpenCV on iOS. I'm following this example:

http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann

Here is my code, with a few small modifications (the code from the example is wrapped in a function that returns a UIImage as the result, and the starting images are read from the bundle):

UIImage* SURFRecognition::test()
{
    UIImage *img1 = [UIImage imageNamed:@"wallet"];
    UIImage *img2 = [UIImage imageNamed:@"wallet2"];

    Mat img_1;
    Mat img_2;

    UIImageToMat(img1, img_1);
    UIImageToMat(img2, img_2);

    if( !img_1.data || !img_2.data )
    {
        std::cout << " --(!) Error reading images " << std::endl;
        return nil;
    }

    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 400;

    SurfFeatureDetector detector( minHessian );

    std::vector<KeyPoint> keypoints_1, keypoints_2;

    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;

    Mat descriptors_1, descriptors_2;

    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
    //-- PS.- radiusMatch can also be used here.
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance <= 2*min_dist )
        { good_matches.push_back( matches[i] ); }
    }

    //-- Draw only "good" matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2,
                good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    //imshow( "Good Matches", img_matches );

    UIImage *imgTemp = MatToUIImage(img_matches);

    for( size_t i = 0; i < good_matches.size(); i++ )
    {
        printf( "-- Good Match [%d] Keypoint 1: %d  -- Keypoint 2: %d  \n", (int)i, good_matches[i].queryIdx, good_matches[i].trainIdx );
    }

    return imgTemp;
}

The result of my function above is:

(screenshot: the resulting image shows only the lines connecting the matches)

Only the lines connecting the matches are shown, but not the original images. If I understand correctly, drawMatches returns a cv::Mat that contains both images with the connections between the similar features drawn on top. Is that right, or am I missing something? Can anyone help me?

Fab*_*oni 8

I found the solution myself. After a lot of searching, it turns out that drawMatches needs img1 and img2 to have 1 to 3 channels. I was opening PNGs with an alpha channel, so those were 4-channel images. Here is my revised code:

Added:

UIImageToMat(img1, img_1);
UIImageToMat(img2, img_2);

cvtColor(img_1, img_1, CV_BGRA2BGR);
cvtColor(img_2, img_2, CV_BGRA2BGR);