How can I improve the image quality?

008*_*ran 0 python ocr image image-processing

I'm building an OCR that reads ID cards. I use YOLO to get the regions of interest and then hand each cropped region to tesseract for reading. Because these cropped images are very small and blurry, tesseract cannot read them and returns wrong predictions, which is annoying! I think the problem could be solved by improving the quality of the cropped images.
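For reference, the flow is roughly like this (a simplified sketch; pytesseract and the crop coordinates are just placeholders to illustrate the setup):

import cv2
import pytesseract

# Full ID card image
card = cv2.imread("id_card.jpg")

# Bounding box of one field as returned by YOLO (placeholder values)
x, y, w, h = 120, 80, 160, 40
crop = card[y:y + h, x:x + w]

# The crop is small and blurry, so tesseract often misreads it
print(pytesseract.image_to_string(crop))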

One of the cropped regions: (image)

Is there any way to improve images like this?

sch*_*kje 6

@vasilisg's answer is a very good solution. One way to improve on it further is to remove the remaining speckles with a morphological opening operation. However, that only works for speckles that are thinner than the strokes of the digits in the image. Another option is to use OpenCV's connected components module to remove "islands" of fewer than N pixels. For example, you could do it as follows:

# External libraries used for
# Image IO
from PIL import Image

# Morphological filtering
from skimage.morphology import opening
from skimage.morphology import disk

# Data handling
import numpy as np

# Connected component filtering
import cv2

black = 0
white = 255
threshold = 160

# Open input image in grayscale mode and get its pixels.
img = Image.open("image.jpg").convert("LA")
pixels = np.array(img)[:,:,0]

# Threshold the image: pixels above the threshold become white, the rest black
pixels[pixels > threshold] = white
pixels[pixels <= threshold] = black


# Morphological opening
blobSize = 1 # Select the maximum radius of the blobs you would like to remove
structureElement = disk(blobSize)  # you can define different shapes, here we take a disk shape
# We need to invert the image such that black is background and white foreground to perform the opening
pixels = np.invert(opening(np.invert(pixels), structureElement))


# Create and save new image.
newImg = Image.fromarray(pixels).convert('RGB')
newImg.save("newImage1.PNG")

# Find the connected components (black objects in your image)
# Because the function searches for white connected components on a black background, we need to invert the image
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(np.invert(pixels), connectivity=8)

# For every connected component in your image, you can obtain the number of pixels from the stats variable in the last
# column. We remove the first entry from sizes, because this is the entry of the background connected component
sizes = stats[1:,-1]
nb_components -= 1

# Define the minimum size (number of pixels) a component should consist of
minimum_size = 100

# Create a new image
newPixels = np.ones(pixels.shape, dtype=np.uint8) * 255

# Iterate over all foreground components, only keep the components larger than minimum size.
# sizes[i] holds the pixel count of the component with label i+1 in `output`.
for i in range(nb_components):
    if sizes[i] > minimum_size:
        newPixels[output == i + 1] = 0

# Create and save new image.
newImg = Image.fromarray(newPixels).convert('RGB')
newImg.save("newImage2.PNG")

In the example above I applied both the opening and the connected-components filtering; if you use the connected-components approach, you can usually omit the opening step (see the sketch after the result images below).

The results look like this:

After thresholding and opening: (image)

After thresholding, opening and connected-components filtering: (image)
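For completeness, the connected-components-only variant mentioned above could look like this (a minimal sketch that reuses the thresholded uint8 array pixels from the code above, i.e. black digits on a white background):

import numpy as np
import cv2
from PIL import Image

minimum_size = 100  # keep only components of at least this many pixels

# Labels are computed on the inverted image (white objects on a black background)
nb_components, output, stats, _ = cv2.connectedComponentsWithStats(np.invert(pixels), connectivity=8)

# Start from an all-white image and paint the kept components black.
# Label 0 is the background, so we skip it.
cleaned = np.full(pixels.shape, 255, dtype=np.uint8)
for label in range(1, nb_components):
    if stats[label, cv2.CC_STAT_AREA] >= minimum_size:
        cleaned[output == label] = 0

Image.fromarray(cleaned).save("newImage_cc_only.PNG")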


Vas*_* G. 5

One approach is to convert the image to grayscale and then compare each pixel against a threshold to decide whether it should be black or white. Pillow is a library that can be used for this kind of processing:

from PIL import Image

black = (0, 0, 0)
white = (255, 255, 255)
threshold = 160

# Open input image in grayscale mode and get its pixels.
img = Image.open("image.jpg").convert("LA")
pixels = img.getdata()

newPixels = []

# Compare each pixel's luminance (first channel of the "LA" tuple) to the threshold
for pixel in pixels:
    if pixel[0] < threshold:
        newPixels.append(black)
    else:
        newPixels.append(white)

# Create and save new image.
newImg = Image.new("RGB",img.size)
newImg.putdata(newPixels)
newImg.save("newImage.jpg")

The resulting image: (image)