Extracting a street network from a raster image

dan*_*nvk 5 python opencv hough-transform scikit-image

I have a 512x512 image of a street grid:

Street grid

I'd like to extract a polyline for each street in this image (big blue dots = intersections, small blue dots = points along the polylines):

Street grid with polylines

I've tried a few things! One idea is to first skeletonize, squeezing the streets down to 1-px-wide lines:

from skimage import morphology
out = morphology.skeletonize(streets_data)

Skeletonized streets

Unfortunately, this has some gaps that break the connectivity of the street network; I'm not entirely sure why, but I suspect it's because some streets are 1 px narrower in some places and 1 px wider in others. (Update: the gaps are not real; they're entirely an artifact of how I was displaying the skeleton. See this comment for the sad story. The skeleton is well connected.)
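One quick way to confirm that a skeleton really is well connected (a sketch of my own, using a small synthetic grid in place of `streets_data`) is to count its 8-connected components with `skimage.measure.label`; a fully connected skeleton has exactly one:

```python
import numpy as np
from skimage import measure, morphology

# synthetic stand-in for streets_data: a plus-shaped street, 5 px wide
streets_data = np.zeros((64, 64), dtype=bool)
streets_data[30:35, :] = True
streets_data[:, 30:35] = True

skel = morphology.skeletonize(streets_data)

# count 8-connected components (connectivity=2 means diagonal
# neighbors count as connected, matching how skeletons are traced)
n_components = measure.label(skel, connectivity=2).max()
print(n_components)
```

If this prints a number greater than 1 on the real data, the gaps are genuine; if it prints 1, they are a display artifact.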

I can patch these up using binary_dilation, at the cost of making the street widths vary again:

out = morphology.skeletonize(streets_data)
out = morphology.binary_dilation(out, morphology.disk(1))

Reconnected streets

With the network reconnected, I can run a Hough transform to find line segments:

import cv2
import numpy as np

# HoughLinesP expects an 8-bit single-channel image
out = out.astype(np.uint8)

rho = 1  # distance resolution in pixels of the Hough grid
theta = np.pi / 180  # angular resolution in radians of the Hough grid
threshold = 8  # minimum number of votes (intersections in Hough grid cell)
min_line_length = 10  # minimum number of pixels making up a line
max_line_gap = 2  # maximum gap in pixels between connectable line segments

# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(
    out, rho, theta, threshold, np.array([]),
    min_line_length, max_line_gap
)

line_image = streets_data.copy()
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), 2, 1)

This produces a whole mess of overlapping line segments, along with some gaps (look at the T intersection on the right side):

Result of the Hough transform

At this point I could try to deduplicate the overlapping line segments, but it's not clear to me that this is a path to a solution, especially given that gap.

Is there a more direct way to get the polyline network I'm looking for? In particular, what are some approaches for:

  1. Finding the intersections (both four-way and T intersections).
  2. Shrinking the streets down to 1 px wide, while allowing for some variable width.
  3. Finding the polylines between the intersections.
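For (1), a common heuristic (my own sketch, not from the post) is to convolve the skeleton with a 3x3 ring kernel to count each pixel's 8-neighbors: skeleton pixels with three or more neighbors are junction candidates (a few candidates may cluster around each true junction), and pixels with exactly one neighbor are endpoints:

```python
import numpy as np
from scipy.ndimage import convolve

# tiny synthetic skeleton: a T intersection of 1-px lines
skel = np.zeros((9, 9), dtype=bool)
skel[4, :] = True    # horizontal bar
skel[4:, 4] = True   # vertical stem going down

# count 8-connected neighbors of every pixel (center weight is 0,
# so a pixel does not count itself)
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
neighbors = convolve(skel.astype(int), kernel, mode='constant')

junctions = skel & (neighbors >= 3)  # candidates near the T's center
endpoints = skel & (neighbors == 1)  # the three free ends of the T

print(np.argwhere(junctions))
print(np.argwhere(endpoints))
```

The polylines for (3) then fall out by deleting the junction pixels and tracing each remaining connected component from endpoint to endpoint.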

小智 5

If you want to improve your "skeletonization", you could try the following algorithm to obtain "1-px-wide streets":

import imageio
import numpy as np
from matplotlib import pyplot as plt
from scipy.ndimage import distance_transform_edt
from skimage.segmentation import watershed

# read image
image_rgb = imageio.imread('1mYBD.png')

# convert to binary
image_bin = np.max(image_rgb, axis=2) > 0

# compute the distance transform (only > 0)
distance = distance_transform_edt(image_bin)

# segment the image into "cells" (i.e. the reciprocal of the network)
cells = watershed(distance)

# compute the image gradients
grad_v = np.pad(cells[1:, :] - cells[:-1, :], ((0, 1), (0, 0)))
grad_h = np.pad(cells[:, 1:] - cells[:, :-1], ((0, 0), (0, 1)))

# given that the cells have a constant value,
# only the edges will have non-zero gradient
edges = (abs(grad_v) > 0) + (abs(grad_h) > 0)

# extract points into (x, y) coordinate pairs
pos_v, pos_h = np.nonzero(edges)

# display points on top of image
plt.imshow(image_bin, cmap='gray_r')
plt.scatter(pos_h, pos_v, 1, np.arange(pos_h.size), cmap='Spectral')

Output

This algorithm works on the "blocks" rather than the "streets"; take a look at the cells image:

Cells
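To go from those scattered edge pixels to one polyline per street, one option (my own sketch, not part of the answer) is to group each boundary pixel by the pair of cell labels it separates: every distinct label pair corresponds to exactly one street segment between two blocks. A minimal version, using a synthetic four-quadrant `cells` array in place of the watershed output:

```python
import numpy as np

# synthetic stand-in for the watershed `cells` labels: four quadrant cells
cells = np.zeros((8, 8), dtype=int)
cells[:4, :4] = 1
cells[:4, 4:] = 2
cells[4:, :4] = 3
cells[4:, 4:] = 4

# for every horizontally/vertically adjacent pixel pair with different
# labels, record the (smaller, larger) label pair and the pixel position
streets = {}
h, w = cells.shape
for y in range(h):
    for x in range(w):
        for dy, dx in ((0, 1), (1, 0)):
            y2, x2 = y + dy, x + dx
            if y2 < h and x2 < w and cells[y, x] != cells[y2, x2]:
                pair = tuple(sorted((cells[y, x], cells[y2, x2])))
                streets.setdefault(pair, []).append((y, x))

print(sorted(streets))  # one key per adjacent pair of blocks
```

Each value in `streets` is then the pixel chain of one street, which can be ordered and simplified (e.g. with Douglas-Peucker) into the polyline between two intersections.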