Unwarping an image using a height map?

Ble*_*der 7 · c++ · python · opencv · image-processing · unwarp

I have a height map for an image that tells me the offset of each pixel in the Z direction. My goal is to flatten a distorted image using only its height map.

How would I go about doing this? I know the position of the camera, if that helps.


To do this, I was thinking of treating each pixel as a point on a plane, translating each point vertically by the Z value I get from the height map, and then projecting the shifted points from the camera's point of view (imagine looking down at the points from above; the vertical shift makes them appear to move from your perspective).

From that projected shift, I can extract the X and Y displacement of each pixel, which I can feed into cv.Remap().

But I don't know how to get the projected 3D offset of a point with OpenCV, let alone build an offset map out of it.
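For the simplest case, the projected shift doesn't need OpenCV at all: with a camera looking straight down at the z = 0 plane, it falls out of similar triangles. This is only a sketch of that idea, with made-up example numbers, not the full rotated-camera model used later:

```python
def project_shift(x, y, z, cam):
    """Shift of a point lifted to height z, as seen by a camera at
    cam = (cx, cy, cz) looking straight down at the z = 0 plane.

    By similar triangles, the lifted point appears pushed away from
    the point directly under the camera by z / (cz - z) of its
    horizontal distance from that point.
    """
    cx, cy, cz = cam
    k = z / (cz - z)
    return (x - cx) * k, (y - cy) * k

# A point 10 units high, seen by a camera 100 units above the plane:
dx, dy = project_shift(50.0, 20.0, 10.0, cam=(0.0, 0.0, 100.0))
```

These (dx, dy) values per pixel are exactly what would go into the remap tables.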


Here are my reference images, for context:

[Calibration image] [Warped image]

I know the angle of the laser (45 degrees), and from the calibration image I can easily work out the height of the book:

h(x) = sin(theta) * abs(calibration(x) - actual(x))
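In code, with theta = 45 degrees, that formula is a one-liner; `calibration_x` and `actual_x` below stand for the laser line's per-column positions in the two images (illustrative names, not from the original code):

```python
from math import sin, radians

THETA = radians(45)  # laser angle from the question

def height(calibration_x, actual_x):
    """Height of the page at one column, computed from the laser line's
    pixel displacement between the calibration and warped images."""
    return sin(THETA) * abs(calibration_x - actual_x)

# e.g. the laser line moved 20 pixels at this column:
h = height(120, 140)
```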

I did this for both laser lines and linearly interpolated between them to generate the surface (Python code; it runs inside a loop):

# Note: for a true linear interpolation this should be divided by the
# image height, since the two weights sum to cv.GetSize(image)[1], not 1.
height_grid[x][y] = heights_top[x] * (cv.GetSize(image)[1] - y) + heights_bottom[x] * y
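The same interpolation can be done as one NumPy broadcast instead of a per-pixel Python loop. A sketch, assuming `heights_top` and `heights_bottom` are 1-D arrays with one entry per column (and normalizing by the image height so the result stays in height units):

```python
import numpy as np

def build_height_grid(heights_top, heights_bottom, h):
    """Linearly interpolate between the top and bottom height profiles
    down the h rows of the image, vectorized over all columns at once."""
    top = np.asarray(heights_top, dtype=np.float64)
    bottom = np.asarray(heights_bottom, dtype=np.float64)
    # Row weight runs from 1 at y = 0 toward 0 near y = h, matching the
    # loop above but divided by h so the weights sum to 1.
    y = np.arange(h)[:, None]
    return (top[None, :] * (h - y) + bottom[None, :] * y) / h

grid = build_height_grid([0.0, 2.0], [4.0, 6.0], 4)
# grid[0] is the top profile; rows move toward the bottom profile as y grows
```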

Hope that helps ;)


Now, here's what I have to unwarp the image. All the weird stuff in the middle projects the 3D coordinates onto the camera plane, given the camera's position, rotation, and so on:

# Simple container for a 3-D point.
class Point:
  def __init__(self, x = 0, y = 0, z = 0):
    self.x = x
    self.y = y
    self.z = z

# One float per pixel for each remap table (note the argument order:
# cv.CreateMat takes rows first, so height comes before width).
mapX = cv.CreateMat(cv.GetSize(image)[1], cv.GetSize(image)[0], cv.CV_32FC1)
mapY = cv.CreateMat(cv.GetSize(image)[1], cv.GetSize(image)[0], cv.CV_32FC1)

c = Point(CAMERA_POSITION[0], CAMERA_POSITION[1], CAMERA_POSITION[2])      # camera position
theta = Point(CAMERA_ROTATION[0], CAMERA_ROTATION[1], CAMERA_ROTATION[2])  # camera rotation (Euler angles)
d = Point()                                          # scratch: point rotated into camera space
e = Point(0, 0, CAMERA_POSITION[2] + SENSOR_OFFSET)  # sensor position on the optical axis

costx = cos(theta.x)
costy = cos(theta.y)
costz = cos(theta.z)

sintx = sin(theta.x)
sinty = sin(theta.y)
sintz = sin(theta.z)


for x in xrange(cv.GetSize(image)[0]):
  for y in xrange(cv.GetSize(image)[1]):

    # Lift the pixel to its interpolated height (the height profiles are
    # sampled at half the image resolution, hence x / 2).
    a = Point(x, y, heights_top[x / 2] * (cv.GetSize(image)[1] - y) + heights_bottom[x / 2] * y)

    # Rotate a - c into camera space (expanded Euler rotation).
    d.x = costy * (sintz * (a.y - c.y) + costz * (a.x - c.x)) - sinty * (a.z - c.z)
    d.y = sintx * (costy * (a.z - c.z) + sinty * (sintz * (a.y - c.y) + costz * (a.x - c.x))) + costx * (costz * (a.y - c.y) - sintz * (a.x - c.x))
    d.z = costx * (costy * (a.z - c.z) + sinty * (sintz * (a.y - c.y) + costz * (a.x - c.x))) - sintx * (costz * (a.y - c.y) - sintz * (a.x - c.x))

    # Project onto the sensor plane and store the resulting pixel shift.
    mapX[y, x] = x + (d.x - e.x) * (e.z / d.z)
    mapY[y, x] = y + (d.y - e.y) * (e.z / d.z)


print
print 'Remapping original image using map...'

remapped = cv.CreateImage(cv.GetSize(image), 8, 3)
cv.Remap(image, remapped, mapX, mapY, cv.CV_INTER_LINEAR)

This is turning into a huge thread of images and code... Anyway, this block takes 7 minutes to run on an 18-megapixel camera image; that's far too long, and in the end the approach does nothing visible to the image (the offset of every pixel comes out << 1).
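On the speed side, most of those 7 minutes go to the per-pixel Python loop; the exact same rotation-and-projection arithmetic can be done once over whole arrays with NumPy. A sketch under the same camera model (it assumes e.x = e.y = 0 as in the code above; `cam_pos`, `cam_rot`, and `sensor_z` stand in for CAMERA_POSITION, CAMERA_ROTATION, and e.z):

```python
import numpy as np

def build_maps(heights, cam_pos, cam_rot, sensor_z):
    """Vectorized equivalent of the per-pixel loop: rotate every lifted
    pixel into camera space and project it onto the sensor, producing
    the float32 mapX/mapY tables that cv.Remap (or cv2.remap) expects."""
    h, w = heights.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    ax = xs - cam_pos[0]
    ay = ys - cam_pos[1]
    az = heights - cam_pos[2]

    costx, costy, costz = np.cos(cam_rot)
    sintx, sinty, sintz = np.sin(cam_rot)

    # Same expanded Euler rotation as the loop, applied array-wise.
    dx = costy * (sintz * ay + costz * ax) - sinty * az
    dy = sintx * (costy * az + sinty * (sintz * ay + costz * ax)) + costx * (costz * ay - sintz * ax)
    dz = costx * (costy * az + sinty * (sintz * ay + costz * ax)) - sintx * (costz * ay - sintz * ax)

    map_x = (xs + dx * (sensor_z / dz)).astype(np.float32)
    map_y = (ys + dy * (sensor_z / dz)).astype(np.float32)
    return map_x, map_y
```

Swapping the double loop for one call like this typically turns minutes into seconds on an 18-megapixel image.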

Any ideas?

Ble*_*der · 3

I ended up implementing my own solution:

for x in xrange(cv.GetSize(image)[0]):
  for y in xrange(cv.GetSize(image)[1]):

    # Lift the pixel to its interpolated height (profiles sampled at
    # half the image resolution, hence x / 2).
    a = Point(x, y, heights_top[x / 2] * (cv.GetSize(image)[1] - y) + heights_bottom[x / 2] * y)

    # Rotate a - c into camera space (expanded Euler rotation).
    d.x = costy * (sintz * (a.y - c.y) + costz * (a.x - c.x)) - sinty * (a.z - c.z)
    d.y = sintx * (costy * (a.z - c.z) + sinty * (sintz * (a.y - c.y) + costz * (a.x - c.x))) + costx * (costz * (a.y - c.y) - sintz * (a.x - c.x))
    d.z = costx * (costy * (a.z - c.z) + sinty * (sintz * (a.y - c.y) + costz * (a.x - c.x))) - sintx * (costz * (a.y - c.y) - sintz * (a.x - c.x))

    # Same projection as before, with an empirical 100x gain on the
    # shift so the sub-pixel offsets actually move the image.
    mapX[y, x] = x + 100.0 * (d.x - e.x) * (e.z / d.z)
    mapY[y, x] = y + 100.0 * (d.y - e.y) * (e.z / d.z)


print
print 'Remapping original image using map...'

remapped = cv.CreateImage(cv.GetSize(image), 8, 3)
cv.Remap(image, remapped, mapX, mapY, cv.CV_INTER_LINEAR)

This (slowly) remaps each pixel using cv.Remap, and it seems to sort of work...