Completely converting a black and white image into a set of lines (also called vectorising using only lines)

Thi*_*ser 5 python opencv image-processing computer-vision python-3.x

I have a number of black and white images and would like to convert them into a set of lines, such that I can fully, or at least close to fully, reconstruct the original image from those lines. In other words, I'm trying to vectorise the image into a set of lines.

I have already looked at the Hough line transform, however it does not cover every part of the image and is more about finding lines in the image than about converting the image completely into a line representation. In addition, the line transform does not encode the actual width of the lines, leaving me guessing at how to reconstruct the images (which I need to do, since this is a preprocessing step for training a machine learning algorithm).

So far I have tried the following code using the Hough line transform:

import numpy as np
import cv2

MetersPerPixel=0.1

def loadImageGray(path):
    img=(cv2.imread(path,0))
    return img

def LineTransform(img):
    edges = cv2.Canny(img,50,150,apertureSize = 3)
    minLineLength = 10
    maxLineGap = 20
    lines = cv2.HoughLines(edges,1,np.pi/180,100,minLineLength,maxLineGap)
    return lines

def saveLines(liness):
    img=np.zeros((2000,2000,3), np.uint8)
    for lines in liness:
        for x1,y1,x2,y2 in lines:
            print(x1,y1,x2,y2)
            img=cv2.line(img,(x1,y1),(x2,y2),(0,255,0),3)
    cv2.imwrite('houghlines5.jpg',img)

def main():
    img=loadImageGray("loadtest.png")
    lines=LineTransform(img)
    saveLines(lines)

main()
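Note that cv2.HoughLines returns (rho, theta) pairs rather than endpoints, so the x1, y1, x2, y2 unpacking in saveLines really only matches the output of cv2.HoughLinesP. A minimal sketch of converting (rho, theta) pairs into drawable endpoints, assuming the standard OpenCV parameterisation and an arbitrary 1000-pixel extension on each side of the reference point:

def drawRhoThetaLines(lines):
    # convert each (rho, theta) pair from cv2.HoughLines into two endpoints and draw them
    img = np.zeros((2000, 2000, 3), np.uint8)
    for line in lines:
        rho, theta = line[0]
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho              # closest point on the line to the origin
        x1, y1 = int(x0 - 1000 * b), int(y0 + 1000 * a)
        x2, y2 = int(x0 + 1000 * b), int(y0 - 1000 * a)
        img = cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
    return img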

However, when testing it using the following image: (input image)

I got this result: (output image)

As you can see, it misses the lines that are not aligned with the axes, and if you look closely, even the detected lines have been split into two lines with some distance between them. I also had to draw these lines with a preset width, while the actual width is unknown.

Edit: following @MarkSetchell's suggestion, I tried pypotrace using the code below. Currently it largely ignores the Bezier curves and just tries to treat them as if they were straight lines; I will focus on that problem later, but for now the results aren't the best:

import potrace

def TraceLines(img):
    # bitmap(): helper (not shown) that converts the image into a boolean array for potrace
    bmp = potrace.Bitmap(bitmap(img))
    path = bmp.trace()
    lines = []
    i = 0
    for curve in path:
        for segment in curve:
            print(repr(segment))
            if segment.is_corner:
                c_x, c_y = segment.c
                c2_x, c2_y = segment.end_point
            else:
                # Bezier segment: approximated here as a straight line from c1 to end_point
                c_x, c_y = segment.c1
                c2_x, c2_y = segment.end_point
            lines.append([[int(c_x), int(c_y), int(c2_x), int(c2_y)]])
            i = i + 1
    return lines
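The Bezier segments, which the loop above approximates as a single straight line from c1 to end_point, could instead be flattened into several short segments by sampling the cubic curve. A rough sketch, assuming pypotrace exposes curve.start_point and that the running start point is tracked separately (curve.start_point for the first segment of a curve, the previous segment's end_point afterwards):

def flattenBezier(p0, c1, c2, p1, steps=8):
    # sample the cubic Bezier B(t) = (1-t)^3*p0 + 3(1-t)^2*t*c1 + 3(1-t)*t^2*c2 + t^3*p1
    pts = []
    for i in range(steps + 1):
        t = i / steps
        x = (1-t)**3*p0[0] + 3*(1-t)**2*t*c1[0] + 3*(1-t)*t**2*c2[0] + t**3*p1[0]
        y = (1-t)**3*p0[1] + 3*(1-t)**2*t*c1[1] + 3*(1-t)*t**2*c2[1] + t**3*p1[1]
        pts.append((x, y))
    # return the sampled polyline as [x1, y1, x2, y2] segments
    return [[int(x1), int(y1), int(x2), int(y2)]
            for (x1, y1), (x2, y2) in zip(pts[:-1], pts[1:])]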

This resulted in this image: (traced image), which is an improvement; however, while the circle can be dealt with later, the missing parts of the square and the weird artefacts on the other straight lines are more problematic. Does anyone know how to fix them? Any tips on how to get the width of the lines?

Does anyone have suggestions on how to better approach this problem?

Edit edit: here is another test image: (variable wall width), which contains several of the line widths I want to capture.

Jon*_*tra 7

OpenCV

Using OpenCV's findContours and drawContours, it is possible to first vectorise the lines and then exactly recreate the original image:

import numpy as np

import cv2

img = cv2.imread('loadtest.png', 0)

result_fill = np.ones(img.shape, np.uint8) * 255
result_borders = np.zeros(img.shape, np.uint8)

# the '[:-1]' is used to skip the contour at the outer border of the image
contours = cv2.findContours(img, cv2.RETR_LIST,
                            cv2.CHAIN_APPROX_SIMPLE)[0][:-1]

# fill spaces between contours by setting thickness to -1
cv2.drawContours(result_fill, contours, -1, 0, -1)
cv2.drawContours(result_borders, contours, -1, 255, 1)

# xor the filled result and the borders to recreate the original image
result = result_fill ^ result_borders

# prints True: the result is now exactly the same as the original
print(np.array_equal(result, img))

cv2.imwrite('contours.png', result)
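One caveat worth noting: the [0] indexing assumes OpenCV 4.x, where findContours returns (contours, hierarchy); OpenCV 3.x returns (image, contours, hierarchy) instead. A version-agnostic variant takes the second-to-last element:

# works on both OpenCV 3.x and 4.x
res = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = res[-2][:-1]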

Result

(recreated image)

scikit-image

Using scikit-image's find_contours and approximate_polygon allows you to reduce the number of lines by approximating polygons (based on this example):

import numpy as np
from skimage.measure import approximate_polygon, find_contours

import cv2

img = cv2.imread('loadtest.png', 0)
contours = find_contours(img, 0)

result_contour = np.zeros(img.shape + (3, ), np.uint8)
result_polygon1 = np.zeros(img.shape + (3, ), np.uint8)
result_polygon2 = np.zeros(img.shape + (3, ), np.uint8)

for contour in contours:
    print('Contour shape:', contour.shape)

    # reduce the number of lines by approximating polygons
    polygon1 = approximate_polygon(contour, tolerance=2.5)
    print('Polygon 1 shape:', polygon1.shape)

    # increase tolerance to further reduce number of lines
    polygon2 = approximate_polygon(contour, tolerance=15)
    print('Polygon 2 shape:', polygon2.shape)

    contour = contour.astype(int).tolist()
    polygon1 = polygon1.astype(int).tolist()
    polygon2 = polygon2.astype(int).tolist()

    # draw contour lines
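    # note: scikit-image returns points in (row, col) order, i.e. (y, x), hence the swap below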
    for idx, coords in enumerate(contour[:-1]):
        y1, x1, y2, x2 = coords + contour[idx + 1]
        result_contour = cv2.line(result_contour, (x1, y1), (x2, y2),
                                  (0, 255, 0), 1)
    # draw polygon 1 lines
    for idx, coords in enumerate(polygon1[:-1]):
        y1, x1, y2, x2 = coords + polygon1[idx + 1]
        result_polygon1 = cv2.line(result_polygon1, (x1, y1), (x2, y2),
                                   (0, 255, 0), 1)
    # draw polygon 2 lines
    for idx, coords in enumerate(polygon2[:-1]):
        y1, x1, y2, x2 = coords + polygon2[idx + 1]
        result_polygon2 = cv2.line(result_polygon2, (x1, y1), (x2, y2),
                                   (0, 255, 0), 1)

cv2.imwrite('contour_lines.png', result_contour)
cv2.imwrite('polygon1_lines.png', result_polygon1)
cv2.imwrite('polygon2_lines.png', result_polygon2)

Results

Python output:

Contour shape: (849, 2)
Polygon 1 shape: (28, 2)
Polygon 2 shape: (9, 2)
Contour shape: (825, 2)
Polygon 1 shape: (31, 2)
Polygon 2 shape: (9, 2)
Contour shape: (1457, 2)
Polygon 1 shape: (9, 2)
Polygon 2 shape: (8, 2)
Contour shape: (879, 2)
Polygon 1 shape: (5, 2)
Polygon 2 shape: (5, 2)
Contour shape: (973, 2)
Polygon 1 shape: (5, 2)
Polygon 2 shape: (5, 2)
Contour shape: (224, 2)
Polygon 1 shape: (4, 2)
Polygon 2 shape: (4, 2)
Contour shape: (825, 2)
Polygon 1 shape: (13, 2)
Polygon 2 shape: (13, 2)
Contour shape: (781, 2)
Polygon 1 shape: (13, 2)
Polygon 2 shape: (13, 2)
Run Code Online (Sandbox Code Playgroud)

contour_lines.png:

(contour_lines.png image)

polygon1_lines.png:

(polygon1_lines.png image)

polygon2_lines.png:

(polygon2_lines.png image)

The lengths of the lines can then be calculated by applying the Pythagorean theorem to the coordinates: line_length = math.sqrt((x2 - x1)**2 + (y2 - y1)**2). If you want to get the widths of the lines in numerical form, take a look at the answers to "How to determine the width of the lines?" for some suggested approaches.
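For example, a small helper along those lines could compute the segment lengths of one of the approximated polygons (a sketch, assuming the [y, x] point lists produced in the loop above):

import math

def segment_lengths(polygon):
    # polygon is a list of [y, x] points; returns the length of each consecutive segment
    return [math.hypot(x2 - x1, y2 - y1)
            for (y1, x1), (y2, x2) in zip(polygon[:-1], polygon[1:])]

# e.g. lengths of the last approximated polygon from the loop above
print(segment_lengths(polygon2))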


Mar*_*ell 5

I had a go at this and am not completely happy with the results, but I thought I would share my ideas and some code, and anyone else is welcome to take it on, borrow from it, steal it or develop any of the ideas further.

I think some of the issues stem from the choice of Canny as the edge detection, because it results in two edges, so my first plan of attack was to replace it with a skeletonisation from scikit-image. That gives this as the 'edge' image:

(skeleton image)

Then I decided to use HoughLinesP rather than HoughLines, but that didn't seem to help much. I tried increasing and decreasing the resolution parameters, but that didn't help either. So I decided to dilate the skeleton a little to fatten it up, after which it seemed to start detecting the shapes better, and I got this:

(detected lines image)

I'm not sure why it is so sensitive to line thickness, and, as I said, if anyone else wants to take it and experiment, this is where I got to with the code:

#!/usr/bin/env python3

import numpy as np
import cv2
from skimage.morphology import medial_axis, dilation, disk

def loadImageGray(path):
    img=cv2.imread(path,0)
    return img

def LineTransform(img): 
    # Try skeletonising image rather than Canny edge - only one line instead of both sides of line
    skeleton = (medial_axis(255-img)*255).astype(np.uint8)
    cv2.imwrite('skeleton.png',skeleton)

    # Try dilating skeleton to make it fatter and more detectable
    selem = disk(2)
    fatskel = dilation(skeleton,selem)
    cv2.imwrite('fatskeleton.png',fatskel)

    minLineLength = 10
    maxLineGap = 20
    lines = cv2.HoughLinesP(fatskel,1,np.pi/180,100,minLineLength=minLineLength,maxLineGap=maxLineGap)
    return lines

def saveLines(liness):
    img=np.zeros((2000,2000,3), np.uint8)
    for lines in liness:
        for x1,y1,x2,y2 in lines:
            print(x1,y1,x2,y2)
            img=cv2.line(img,(x1,y1),(x2,y2),(0,255,0),3)
    cv2.imwrite('houghlines.png',img)

img=loadImageGray("loadtest.png")
lines=LineTransform(img)
saveLines(lines)

In fact, if you take the code above, leave out the skeletonisation and fattening, and just use the inverse of the original image for HoughLinesP, the results are pretty similar:

def LineTransform(img): 
    minLineLength = 10
    maxLineGap = 20
    lines = cv2.HoughLinesP(255-img,1,np.pi/180,100,minLineLength=minLineLength,maxLineGap=maxLineGap)
    return lines
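One possible way to tackle the line-width question would be to also request the distance transform from medial_axis; the distance at each skeleton pixel is roughly half the local stroke width. A rough sketch:

import numpy as np
import cv2
from skimage.morphology import medial_axis

img = cv2.imread('loadtest.png', 0)

# return_distance=True also yields the distance-to-background transform
skeleton, distance = medial_axis(255 - img, return_distance=True)

# the distance at skeleton pixels is approximately half the local line width
widths = 2 * distance[skeleton]
print('approximate line widths in pixels:', np.unique(np.round(widths)))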