I am trying to write a Python program that takes an input image (e.g. a JPEG) and produces a "globe kit" output image, similar to le Paper Globe. Essentially, if the output image is printed, cut out, folded and glued together, you should end up with the original image projected onto a rough sphere.
The program would divide the input image into 32 rectangles (8 across, 4 down) and then map each rectangle onto a carefully chosen trapezoid or, more generally, a quadrilateral. I have found a Pillow/PIL method that maps a quadrilateral onto a square, but not one that maps a rectangle onto a quadrilateral.
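For context, the tiling step itself seems manageable with plain Pillow; here is a rough sketch of what I have in mind (the file name is just a placeholder):

from PIL import Image

# Cut the input image into an 8 x 4 grid of equally sized rectangles.
image = Image.open('input.jpg')  # placeholder name
cols, rows = 8, 4
tile_w, tile_h = image.width // cols, image.height // rows

tiles = []
for row in range(rows):
    for col in range(cols):
        box = (col * tile_w, row * tile_h, (col + 1) * tile_w, (row + 1) * tile_h)
        tiles.append(image.crop(box))  # one of the 32 rectangles to be warped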
Does anyone know how to map a rectangle of an input image onto a quadrilateral of an output image in Python? I would prefer Pillow/PIL, but any library that can open and save JPEGs would do.
Basically, you need some perspective transform to achieve this. Pillow's Image.transform is there for exactly that. You have to calculate all the necessary parameters beforehand, i.e. the homography, cf. this Q&A. Personally, I would use OpenCV's warpPerspective, and get the transformation matrix from getPerspectiveTransform, so that you only need to provide four points in the source image and four points in the destination image. That other Q&A gives a good quick start on this.
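If you want to stay purely with Pillow, the homography coefficients for Image.transform can be solved with NumPy. The following is only a minimal sketch of that route (the corner coordinates are made up for illustration); the rest of this answer uses the OpenCV approach instead:

import numpy as np
from PIL import Image

def find_coeffs(dst_quad, src_rect):
    # Solve for the 8 coefficients that Image.transform(..., Image.PERSPECTIVE, ...)
    # expects: each output pixel (x, y) samples the input at
    # ((a*x + b*y + c) / (g*x + h*y + 1), (d*x + e*y + f) / (g*x + h*y + 1)).
    # dst_quad: four corners in the output image,
    # src_rect: the corresponding four corners in the input image.
    rows = []
    for (x, y), (u, v) in zip(dst_quad, src_rect):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
    A = np.array(rows, dtype=float)
    b = np.array(src_rect, dtype=float).reshape(8)
    return np.linalg.solve(A, b)

# Made-up example: map the rectangle (100, 30)-(300, 200) of the input
# onto a quadrilateral in a 600 x 800 output canvas (same corner order!).
src = [(100, 30), (300, 30), (300, 200), (100, 200)]  # tl, tr, br, bl
dst = [(100, 20), (430, 70), (540, 350), (40, 740)]   # tl, tr, br, bl
coeffs = find_coeffs(dst, src)

image = Image.open('path/to/your/image.png')
warped = image.transform((600, 800), Image.PERSPECTIVE, coeffs, Image.BICUBIC)
warped.save('warped_pillow.jpg')  # pixels sampled outside the input stay black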
Before we go into the details, I just want to make sure we agree on the goal: paste a rectangular region of interest from the input image onto some quadrilateral inside an output image, with the content warped accordingly.
So, the complete algorithm is:

1. Open the input image, and define the rectangle (region of interest, ROI) to be copied.
2. Set up the output image containing the target quadrilateral.
3. Determine the four source points, i.e. the corners of the ROI in the input image.
4. Find (or know) the four target points, i.e. the corners of the quadrilateral in the output image.
5. Get the transformation matrix from the point correspondences, and warp the input image.
6. Build a mask from the quadrilateral in the output image, and copy the content from the warped image.
7. Convert back to a PIL image and save.
And here is the complete code, including some visualization:
import cv2
import numpy as np
from PIL import Image, ImageDraw

# Input image to get rectangle (region of interest, roi) from
image = Image.open('path/to/your/image.png')
roi = ((100, 30), (300, 200))

# Dummy output image with some quad to paste to
output = Image.new('RGB', (600, 800), (255, 255, 255))
draw = ImageDraw.Draw(output)
draw.polygon(((100, 20), (40, 740), (540, 350), (430, 70)), outline=(0, 0, 0))

# Convert images to NumPy arrays for processing in OpenCV
image_cv2 = np.array(image)
output_cv2 = np.array(output)

# Source points, i.e. roi in input image
tl = (roi[0][0], roi[0][1])
tr = (roi[1][0], roi[0][1])
br = (roi[1][0], roi[1][1])
bl = (roi[0][0], roi[1][1])
pts = np.array([bl, br, tr, tl])

# Find (or know) target points in output image w.r.t. the quad
# Attention: The order must be the same as defined by the roi points!
tl_dst = (100, 20)
tr_dst = (430, 70)
br_dst = (540, 350)
bl_dst = (40, 740)
dst_pts = np.array([bl_dst, br_dst, tr_dst, tl_dst])

# Get transformation matrix, and warp image
pts = np.float32(pts.tolist())
dst_pts = np.float32(dst_pts.tolist())
M = cv2.getPerspectiveTransform(pts, dst_pts)
image_size = (output_cv2.shape[1], output_cv2.shape[0])
warped = cv2.warpPerspective(image_cv2, M, dsize=image_size)

# Get mask from quad in output image, and copy content from warped image
gray = cv2.cvtColor(output_cv2, cv2.COLOR_BGR2GRAY)
gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)[1]
cnts = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
mask = np.zeros_like(output_cv2)
mask = cv2.drawContours(mask, cnts, 0, (255, 255, 255), cv2.FILLED)
mask = mask.all(axis=2)
output_cv2[mask, :] = warped[mask, :]

# Transform back to PIL images
output_new = Image.fromarray(output_cv2)
output_new.save('final_output.jpg')

# Just for visualization
import matplotlib.pyplot as plt
draw = ImageDraw.Draw(image)
draw.rectangle(roi, outline=(255, 0, 0), width=3)
plt.figure(0, figsize=(18, 9))
plt.subplot(1, 3, 1), plt.imshow(image), plt.title('Input with ROI')
plt.subplot(1, 3, 2), plt.imshow(output), plt.title('Output with quad')
plt.subplot(1, 3, 3), plt.imshow(output_new), plt.title('Final output')
plt.tight_layout(), plt.show()

At step #4, finding the destination points automatically, you could do the following:
# Find target points in output image w.r.t. the quad
gray = cv2.cvtColor(output_cv2, cv2.COLOR_BGR2GRAY)
gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)[1]
cnts = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
approx = cv2.approxPolyDP(cnts[0], 0.03 * cv2.arcLength(cnts[0], True), True)

This basically finds the contour in the image and approximates its corner points. You would still need to determine the correct order of the resulting points...
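One common way to bring four detected corners into a fixed top-left, top-right, bottom-right, bottom-left order is to compare coordinate sums and differences. This is only a small sketch, not part of the snippet above, and it assumes a reasonably "upright", convex quadrilateral:

import numpy as np

def order_points(pts):
    # Order four (x, y) points as top-left, top-right, bottom-right, bottom-left.
    pts = np.asarray(pts, dtype=float).reshape(4, 2)
    ordered = np.zeros((4, 2), dtype=float)
    s = pts.sum(axis=1)             # x + y
    d = np.diff(pts, axis=1)[:, 0]  # y - x
    ordered[0] = pts[np.argmin(s)]  # top-left: smallest x + y
    ordered[2] = pts[np.argmax(s)]  # bottom-right: largest x + y
    ordered[1] = pts[np.argmin(d)]  # top-right: smallest y - x
    ordered[3] = pts[np.argmax(d)]  # bottom-left: largest y - x
    return ordered

# e.g. with the quad corners from the example above, given in arbitrary order:
print(order_points([(540, 350), (100, 20), (40, 740), (430, 70)]))
# -> tl (100, 20), tr (430, 70), br (540, 350), bl (40, 740)

You can then rearrange the result into whatever order your source points use (bl, br, tr, tl in the code above).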
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
Matplotlib: 3.3.3
NumPy: 1.19.5
OpenCV: 4.5.1
Pillow: 8.1.0
----------------------------------------