I am currently working on a specific computer vision task. Say we have a camera frame of a road. I now want to generate a new frame for an imaginary camera that has been translated horizontally; on top of that, a slight camera angle is added. To illustrate the idea, I uploaded a demo picture:

How can I create this new frame from the original frame in Python? For my other computer vision tasks I am already using OpenCV.
I struggled with this for a while as well, until I came across this helpful post that shares some sample code. I understood in theory that, given the homography matrix, you can generate the new frame with OpenCV's warpPerspective function. Since you have exact translation and rotation values, you can derive that matrix yourself from the camera intrinsics. However, it only fully clicked for me once I tried the code myself.
We know that, for the projection of 3D points in space onto a 2D image, the homography matrix is given by
H = K[R|T]
To transform points from one 2D image into another, you simply back-project them into 3D first and then re-project them onto the new image plane:
x' = K * [R2|T2] * [R1|T1](inv) * K(inv) * x
[R2|T2] * [R1|T1](inv) amounts to a single transformation matrix that gives the relative transformation from one camera pose to the other. All of the matrices are made 4x4 by appending [0, 0, 0, 1] where needed.
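To make the matrix algebra concrete, here is a minimal sketch of my own (separate from the full interactive example further down) that composes the homography for a single relative pose, assuming the first camera sits at the origin so that [R1|T1] is the identity. The helper name build_homography and the parameter values in the usage comment are mine, purely for illustration; replace f and the motion values with your own calibration and desired virtual-camera motion.

import cv2
import numpy as np

def build_homography(f, w, h, tx, ty, tz, yaw_deg):
    """Compose H = K * [R2|T2] * [R1|T1](inv) * K(inv) with camera 1 at the origin.

    f        : focal length in pixels (assumed equal in x and y)
    w, h     : image width and height
    tx,ty,tz : translation of the virtual camera
    yaw_deg  : small rotation about the vertical (y) axis, in degrees
    """
    # 3x4 intrinsic matrix, principal point at the image centre
    K = np.array([[f, 0, w / 2, 0],
                  [0, f, h / 2, 0],
                  [0, 0, 1,     0]], dtype=np.float64)

    # 4x3 "inverse" that back-projects pixels to a plane at depth f,
    # mirroring the Kinv construction in the full example
    Kinv = np.zeros((4, 3))
    Kinv[:3, :3] = np.linalg.inv(K[:3, :3]) * f
    Kinv[-1, :] = [0, 0, 1]

    # 4x4 rotation about the y axis, padded with [0, 0, 0, 1]
    a = np.deg2rad(yaw_deg)
    R = np.array([[ np.cos(a), 0, np.sin(a), 0],
                  [ 0,         1, 0,         0],
                  [-np.sin(a), 0, np.cos(a), 0],
                  [ 0,         0, 0,         1]])

    # 4x4 translation, padded with [0, 0, 0, 1]
    T = np.array([[1, 0, 0, tx],
                  [0, 1, 0, ty],
                  [0, 0, 1, tz],
                  [0, 0, 0, 1]])

    # With [R1|T1] = identity, the relative transform is just [R2|T2] = R @ T,
    # and the resulting product is the 3x3 homography
    return np.linalg.multi_dot([K, R, T, Kinv])

# Hypothetical usage: shift the virtual camera sideways and add a small yaw
# src = cv2.imread('test.jpg')
# h, w = src.shape[:2]
# H = build_homography(f=500, w=w, h=h, tx=30, ty=0, tz=0, yaw_deg=2)
# dst = cv2.warpPerspective(src, H, (w, h))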
Here is some sample code, adapted from the code in that same post. It adds trackbars so you can play with the focal length, rotation, and translation interactively:
import cv2
import numpy as np

f = 500
rotXval = 90
rotYval = 90
rotZval = 90
distXval = 500
distYval = 500
distZval = 500

def onFchange(val):
    global f
    f = val

def onRotXChange(val):
    global rotXval
    rotXval = val

def onRotYChange(val):
    global rotYval
    rotYval = val

def onRotZChange(val):
    global rotZval
    rotZval = val

def onDistXChange(val):
    global distXval
    distXval = val

def onDistYChange(val):
    global distYval
    distYval = val

def onDistZChange(val):
    global distZval
    distZval = val

if __name__ == '__main__':

    # Read input image, and create output image
    src = cv2.imread('test.jpg')
    src = cv2.resize(src, (640, 480))
    dst = np.zeros_like(src)
    h, w = src.shape[:2]

    # Create user interface with trackbars that will allow to modify the parameters of the transformation
    wndname1 = "Source:"
    wndname2 = "WarpPerspective: "
    cv2.namedWindow(wndname1, 1)
    cv2.namedWindow(wndname2, 1)
    cv2.createTrackbar("f", wndname2, f, 1000, onFchange)
    cv2.createTrackbar("Rotation X", wndname2, rotXval, 180, onRotXChange)
    cv2.createTrackbar("Rotation Y", wndname2, rotYval, 180, onRotYChange)
    cv2.createTrackbar("Rotation Z", wndname2, rotZval, 180, onRotZChange)
    cv2.createTrackbar("Distance X", wndname2, distXval, 1000, onDistXChange)
    cv2.createTrackbar("Distance Y", wndname2, distYval, 1000, onDistYChange)
    cv2.createTrackbar("Distance Z", wndname2, distZval, 1000, onDistZChange)

    # Show original image
    cv2.imshow(wndname1, src)

    k = -1
    while k != 27:

        if f <= 0: f = 1
        rotX = (rotXval - 90) * np.pi / 180
        rotY = (rotYval - 90) * np.pi / 180
        rotZ = (rotZval - 90) * np.pi / 180
        distX = distXval - 500
        distY = distYval - 500
        distZ = distZval - 500

        # Camera intrinsic matrix
        K = np.array([[f, 0, w / 2, 0],
                      [0, f, h / 2, 0],
                      [0, 0, 1,     0]])

        # K inverse
        Kinv = np.zeros((4, 3))
        Kinv[:3, :3] = np.linalg.inv(K[:3, :3]) * f
        Kinv[-1, :] = [0, 0, 1]

        # Rotation matrices around the X, Y, Z axis
        RX = np.array([[1, 0,             0,             0],
                       [0, np.cos(rotX), -np.sin(rotX),  0],
                       [0, np.sin(rotX),  np.cos(rotX),  0],
                       [0, 0,             0,             1]])

        RY = np.array([[ np.cos(rotY), 0, np.sin(rotY), 0],
                       [ 0,            1, 0,            0],
                       [-np.sin(rotY), 0, np.cos(rotY), 0],
                       [ 0,            0, 0,            1]])

        RZ = np.array([[np.cos(rotZ), -np.sin(rotZ), 0, 0],
                       [np.sin(rotZ),  np.cos(rotZ), 0, 0],
                       [0,             0,            1, 0],
                       [0,             0,            0, 1]])

        # Composed rotation matrix with (RX, RY, RZ)
        R = np.linalg.multi_dot([RX, RY, RZ])

        # Translation matrix
        T = np.array([[1, 0, 0, distX],
                      [0, 1, 0, distY],
                      [0, 0, 1, distZ],
                      [0, 0, 0, 1]])

        # Overall homography matrix
        H = np.linalg.multi_dot([K, R, T, Kinv])

        # Apply matrix transformation
        cv2.warpPerspective(src, H, (w, h), dst, cv2.INTER_NEAREST, cv2.BORDER_CONSTANT, 0)

        # Show the image
        cv2.imshow(wndname2, dst)
        k = cv2.waitKey(1)
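For the specific case in the question (a horizontal camera shift plus a small angle), the sliders that matter are "Distance X" and "Rotation Y": the trackbar values are offset in the loop so that 500 and 90 mean "no change", so nudging "Distance X" away from 500 translates the virtual camera sideways and nudging "Rotation Y" away from 90 adds a small yaw. Keep in mind that warpPerspective can only remap pixels that exist in the original frame; anything that was occluded or outside the original view will not appear in the new frame.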