Re-distorting points using camera intrinsics/extrinsics

And*_*ndy 10 opencv points distortion camera-calibration

Given a set of 2D points, how can I apply the inverse of undistortPoints?

I have the camera intrinsics and distCoeffs, and would like to (for example) create a square and distort it as if the camera had viewed it through the lens.

I found a 'distort' patch here: http://code.opencv.org/issues/1387 but it seems to work only on images; I want to work on sparse points.

mor*_*paj 7

This question is rather old, but since I ended up here from a Google search without finding a neat answer, I decided to answer it anyway.

There is a function called projectPoints that does exactly this. The C version is used internally by OpenCV when estimating camera parameters with functions like calibrateCamera and stereoCalibrate.

Edit:
To use 2D points as input, we can set all z coordinates to 1 with convertPointsToHomogeneous and use projectPoints with zero rotation and zero translation.

cv::Mat points2d = ...;        // input: undistorted points in normalized coordinates
cv::Mat points3d;              // the same points lifted to z = 1
cv::Mat distorted_points2d;    // output: distorted pixel coordinates
cv::convertPointsToHomogeneous(points2d, points3d);
cv::projectPoints(points3d, cv::Vec3f(0, 0, 0), cv::Vec3f(0, 0, 0), camera_matrix, dist_coeffs, distorted_points2d);
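For the original use case of distorting a square, here is a minimal self-contained sketch. It assumes the square's corners are given in normalized (undistorted) image coordinates, i.e. pixel coordinates already multiplied by the inverse camera matrix; the function name and point values are illustrative, not part of the original answer.

#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

std::vector<cv::Point2f> DistortSquare(const cv::Mat & camera_matrix,
                                       const cv::Mat & dist_coeffs)
{
  // Corners of a square in normalized (undistorted) image coordinates.
  std::vector<cv::Point2f> square;
  square.push_back(cv::Point2f(-0.5f, -0.5f));
  square.push_back(cv::Point2f( 0.5f, -0.5f));
  square.push_back(cv::Point2f( 0.5f,  0.5f));
  square.push_back(cv::Point2f(-0.5f,  0.5f));

  // Lift to 3D by setting z = 1: the points now lie on the normalized
  // image plane in front of the camera.
  std::vector<cv::Point3f> square3d;
  cv::convertPointsToHomogeneous(square, square3d);

  // Project with zero rotation and zero translation, so the only
  // transforms applied are the distortion model and the camera matrix.
  std::vector<cv::Point2f> distorted;
  cv::projectPoints(square3d, cv::Vec3f(0, 0, 0), cv::Vec3f(0, 0, 0),
                    camera_matrix, dist_coeffs, distorted);
  return distorted;
}

If the square is defined in pixel coordinates instead, normalize it first (for example with cv::undistortPoints, which also removes any existing distortion) before lifting the points to z = 1.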

  • Actually, `projectPoints` projects 3D points to 2D points taking the calibration into account; it does not by itself turn undistorted 2D points into distorted ones. (2 upvotes)

Chr*_*ger 5

A simple solution is to use initUndistortRectifyMap to obtain a map from undistorted coordinates to distorted coordinates:

cv::Mat K = ...; // 3x3 intrinsic parameters
cv::Mat D = ...; // 4x1 or similar distortion parameters
int W = 640; // image width
int H = 480; // image height

cv::Mat mapx, mapy;
cv::initUndistortRectifyMap(K, D, cv::Mat(), K, cv::Size(W, H), 
  CV_32F, mapx, mapy);

// distorted location of the undistorted integer pixel (x, y)
float distorted_x = mapx.at<float>(y, x);
float distorted_y = mapy.at<float>(y, x);
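Note that mapx and mapy are only sampled at integer undistorted pixel positions. For sub-pixel input points, bilinearly interpolating the two maps is a reasonable approximation. A sketch, assuming the query point lies at least one pixel inside the map bounds (the helper name is mine):

#include <opencv2/core/core.hpp>

cv::Point2f DistortViaMap(const cv::Mat & mapx, const cv::Mat & mapy,
                          const cv::Point2f & undistorted)
{
  // Integer cell containing the query point, plus the fractional offsets.
  int x0 = (int)undistorted.x;
  int y0 = (int)undistorted.y;
  float ax = undistorted.x - x0;
  float ay = undistorted.y - y0;

  // Bilinear interpolation of mapx at the query position.
  float dx = (1 - ay) * ((1 - ax) * mapx.at<float>(y0, x0)
                         + ax * mapx.at<float>(y0, x0 + 1))
           + ay * ((1 - ax) * mapx.at<float>(y0 + 1, x0)
                   + ax * mapx.at<float>(y0 + 1, x0 + 1));

  // Bilinear interpolation of mapy at the query position.
  float dy = (1 - ay) * ((1 - ax) * mapy.at<float>(y0, x0)
                         + ax * mapy.at<float>(y0, x0 + 1))
           + ay * ((1 - ax) * mapy.at<float>(y0 + 1, x0)
                   + ax * mapy.at<float>(y0 + 1, x0 + 1));

  return cv::Point2f(dx, dy);
}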

I have edited to clarify that the code is correct:

Quoting from the documentation of initUndistortRectifyMap:

For each pixel (u, v) in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from the camera).

map_x(u, v) = x'' * f_x + c_x

map_y(u, v) = y'' * f_y + c_y

In other words, the maps take each undistorted destination pixel to its distorted source position, which is exactly the forward distortion asked for.
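To connect these formulas to code, here is a sketch of how a single map entry could be computed, assuming no rectification (R = I), a new camera matrix equal to K, and the standard five-parameter distortion model (k1, k2, p1, p2, k3); the function name is illustrative:

cv::Point2d ComputeMapEntry(double u, double v,
                            double fx, double fy, double cx, double cy,
                            double k1, double k2, double p1, double p2,
                            double k3)
{
  // Back-project the undistorted destination pixel to normalized
  // coordinates (x', y' in the documentation).
  double x = (u - cx) / fx;
  double y = (v - cy) / fy;

  // Apply the radial and tangential distortion model to get (x'', y'').
  double r2 = x * x + y * y;
  double radial = 1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
  double xd = x * radial + 2. * p1 * x * y + p2 * (r2 + 2. * x * x);
  double yd = y * radial + p1 * (r2 + 2. * y * y) + 2. * p2 * x * y;

  // Project back to pixels: these are map_x(u, v) and map_y(u, v).
  return cv::Point2d(xd * fx + cx, yd * fy + cy);
}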


Pas*_* T. 2

I had exactly the same need. Here is a possible solution. Note that the input points must be in normalized, undistorted coordinates, such as those returned by cv::undistortPoints (see the test below):

void MyDistortPoints(const std::vector<cv::Point2d> & src, std::vector<cv::Point2d> & dst, 
                     const cv::Mat & cameraMatrix, const cv::Mat & distorsionMatrix)
{
  dst.clear();
  double fx = cameraMatrix.at<double>(0,0);
  double fy = cameraMatrix.at<double>(1,1);
  double ux = cameraMatrix.at<double>(0,2);
  double uy = cameraMatrix.at<double>(1,2);

  double k1 = distorsionMatrix.at<double>(0, 0);
  double k2 = distorsionMatrix.at<double>(0, 1);
  double p1 = distorsionMatrix.at<double>(0, 2);
  double p2 = distorsionMatrix.at<double>(0, 3);
  double k3 = distorsionMatrix.at<double>(0, 4);
  for (unsigned int i = 0; i < src.size(); i++)
  {
    const cv::Point2d &p = src[i];
    double x = p.x;
    double y = p.y;
    double xCorrected, yCorrected;
    //Step 1 : apply the distortion model to the normalized coordinates
    {
      double r2 = x*x + y*y;
      //radial distortion
      xCorrected = x * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);
      yCorrected = y * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);

      //tangential distortion
      //The "Learning OpenCV" book is wrong here !!!
      //False equations from the "Learning OpenCV" book:
      //xCorrected = xCorrected + (2. * p1 * y + p2 * (r2 + 2. * x * x));
      //yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x);
      //Correct formulae found at: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
      xCorrected = xCorrected + (2. * p1 * x * y + p2 * (r2 + 2. * x * x));
      yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x * y);
    }
    }
    //Step 2 : ideal coordinates => actual coordinates
    {
      xCorrected = xCorrected * fx + ux;
      yCorrected = yCorrected * fy + uy;
    }
    dst.push_back(cv::Point2d(xCorrected, yCorrected));
  }
}

void MyDistortPoints(const std::vector<cv::Point2d> & src, std::vector<cv::Point2d> & dst, 
                     const cv::Matx33d & cameraMatrix, const cv::Matx<double, 1, 5> & distorsionMatrix)
{
  cv::Mat cameraMatrix2(cameraMatrix);
  cv::Mat distorsionMatrix2(distorsionMatrix);
  return MyDistortPoints(src, dst, cameraMatrix2, distorsionMatrix2);
}

void TestDistort()
{
  cv::Matx33d cameraMatrix = cv::Matx33d::zeros();
  {
    //cameraMatrix Init
    double fx = 1000., fy = 950.;
    double ux = 324., uy = 249.;
    cameraMatrix(0, 0) = fx;
    cameraMatrix(1, 1) = fy;
    cameraMatrix(0, 2) = ux;
    cameraMatrix(1, 2) = uy;
    cameraMatrix(2, 2) = 1.;
  }


  cv::Matx<double, 1, 5> distorsionMatrix;
  {
    //distorsion Init
    const double k1 = 0.5, k2 = -0.5, k3 = 0.000005, p1 = 0.07, p2 = -0.05;

    distorsionMatrix(0, 0) = k1;
    distorsionMatrix(0, 1) = k2;
    distorsionMatrix(0, 2) = p1;
    distorsionMatrix(0, 3) = p2;
    distorsionMatrix(0, 4) = k3;
  }


  std::vector<cv::Point2d> distortedPoints;
  std::vector<cv::Point2d> undistortedPoints;
  std::vector<cv::Point2d> redistortedPoints;
  distortedPoints.push_back(cv::Point2d(324., 249.));// equal to the optical center
  distortedPoints.push_back(cv::Point2d(340., 200));
  distortedPoints.push_back(cv::Point2d(785., 345.));
  distortedPoints.push_back(cv::Point2d(0., 0.));
  cv::undistortPoints(distortedPoints, undistortedPoints, cameraMatrix, distorsionMatrix);  
  MyDistortPoints(undistortedPoints, redistortedPoints, cameraMatrix, distorsionMatrix);
  cv::undistortPoints(redistortedPoints, undistortedPoints, cameraMatrix, distorsionMatrix);  

  //Poor man's unit test ensuring we have an accuracy that is better than 0.001 pixel
  for (unsigned int i = 0; i < undistortedPoints.size(); i++)
  {
    cv::Point2d dist = redistortedPoints[i] - distortedPoints[i];
    double norm = sqrt(dist.dot(dist));
    std::cout << "norm = " << norm << std::endl;
    assert(norm < 1E-3);
  }
}