Shift overlapping images over each other to obtain an accurate difference

mit*_*hil 3 python opencv image-comparison imagemagick image-processing

I want to take the difference of a printed image captured with a camera.

I have tried many solutions using Python libraries: OpenCV, ImageMagick, etc.

The solution I found for higher-accuracy image comparison:

  1. Shift the image from left to right and look for the minimum difference.
  2. Shift the image from right to left and look for the minimum difference.
  3. Shift the image from top to bottom and look for the minimum difference.
  4. Shift the image from bottom to top and look for the minimum difference.
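A brute-force version of this shift search can be sketched with NumPy (a hypothetical helper, assuming two equally sized grayscale arrays; note that `np.roll` wraps pixels around the borders, which matters little for small shifts):

```python
import numpy as np

def best_shift(a, b, max_shift=10):
    """Slide `b` over `a` by up to `max_shift` pixels in every direction
    and return the (dy, dx) offset with the smallest mean absolute
    difference, together with that difference."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps around the edges; for small shifts the
            # wrapped strip contributes little to the error.
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            err = np.abs(a.astype(np.int16) - shifted.astype(np.int16)).mean()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err
```

Once the best offset is found, the residual difference image at that offset is what should mark the true defects.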

Conditions under which the images are taken:

  1. The camera never moves (it is mounted on a fixed stand).
  2. The object is placed on the whiteboard by hand, so it is never perfectly aligned. (The angle changes slightly every time, because placement is manual.)

Image samples captured with the camera, for the code below:

Image sample 1: with white dots:

(image: image 1, as the original image)

Image sample 2: as the original image:

(image: image 2, with white dots)

Image sample 3: black dots:

(image)

An acceptable output for the white-dot print is not available; it should mark only the differences (defects):

Accepted output

Currently, I am using the following ImageMagick command to obtain the image difference:

compare -highlight-color black -fuzz 5% -metric AE Image_1.png Image_2.png -compose src diff.png
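The same command can be driven from Python via `subprocess` (a hypothetical wrapper; it assumes an ImageMagick-style `compare` binary on the PATH, which prints the AE metric, i.e. the count of differing pixels, to stderr and uses exit status 2 for errors):

```python
import subprocess

def build_compare_cmd(img1, img2, out, fuzz="5%"):
    # Mirrors the ImageMagick command used above.
    return ["compare", "-highlight-color", "black", "-fuzz", fuzz,
            "-metric", "AE", img1, img2, "-compose", "src", out]

def magick_compare(img1, img2, out, fuzz="5%"):
    """Run `compare` and return the AE metric (pixels that differ).

    Exit status: 0 = images identical, 1 = images differ, 2 = error.
    """
    result = subprocess.run(build_compare_cmd(img1, img2, out, fuzz),
                            capture_output=True, text=True)
    if result.returncode == 2:
        raise RuntimeError(result.stderr.strip())
    return float(result.stderr.split()[0])
```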

The output after diffing is incorrect, because the comparison works pixel by pixel; it is not smart enough to mark only the actual differences:

Output

The solution I described above would be able to produce the required difference as output, but no library or ImageMagick command seems to be available for this kind of image comparison.

Is there any Python code or ImageMagick command that can do this?

Ali*_*asi 6

It seems you are doing a defect detection task. The first solution that comes to my mind is image registration. First, try to take the images under the same conditions (lighting, camera angle, etc.); note that one of your provided images is 2 pixels bigger.

Then you should register the two images and match one to the other, like this:

(image: feature matches between the two images)

Then warp one onto the other with the help of the homography matrix and generate an aligned image; in this case, the result looks like this:

(image: aligned image)

Then take the difference of the aligned image with the query image and threshold it. The result:

(image: thresholded difference)

As I said, if you capture your frames more precisely, the registration will be better and give more accurate results.

The code for each part (mostly taken from here):

import cv2
import numpy as np


MAX_FEATURES = 1000
GOOD_MATCH_PERCENT = 0.5


def alignImages(im1, im2):
    # Convert images to grayscale
    im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_FEATURES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

    # Match features.
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)

    # Sort matches by score (newer OpenCV returns a tuple, which has no .sort())
    matches = sorted(matches, key=lambda x: x.distance)

    # Remove not so good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]

    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    cv2.imwrite("matches.jpg", imMatches)

    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)

    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)

    # Use homography
    height, width, channels = im2.shape
    im1Reg = cv2.warpPerspective(im1, h, (width, height))

    return im1Reg


if __name__ == '__main__':

    # Read the reference image and the image to align
    refFilename = "vv9gFl.jpg"
    imFilename = "uP3CYl.jpg"
    imReference = cv2.imread(refFilename, cv2.IMREAD_COLOR)
    im = cv2.imread(imFilename, cv2.IMREAD_COLOR)

    # The registered image will be stored in imReg.
    imReg = alignImages(im, imReference)

    # Write the aligned image to disk.
    outFilename = "aligned.jpg"
    cv2.imwrite(outFilename, imReg)

For the image difference and thresholding:

aligned = cv2.imread("aligned.jpg", 0)
aligned = aligned[:, :280]

b = cv2.imread("vv9gFl.jpg", 0)
b = b[:, :280]

print(aligned.shape)
print(b.shape)

diff = cv2.absdiff(aligned, b)
cv2.imwrite("diff.png", diff)

threshold = 25
aligned[np.where(diff > threshold)] = 255
aligned[np.where(diff <= threshold)] = 0

cv2.imwrite("threshold.png", aligned)

If you have lots of images and want to do the defect detection task at scale, I suggest training a deep artificial neural network as a Denoising Autoencoder. Read more here.