Loading part of an image in Python

Dan*_*ein 16 python numpy scipy python-imaging-library

This may be a silly question, but...

I have several thousand images that I want to load into Python and then convert to numpy arrays. Obviously this is a bit slow. However, I'm actually only interested in a small portion of each image. (The same portion every time: just the 100x100 pixels at the center of the image.)

Is there a way to load only part of each image, to make things faster?

Here is some sample code in which I generate some sample images, save them, and load them back in.

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import time

# Generate sample images
num_images = 5

for i in range(num_images):
    Z = np.random.rand(2000, 2000)
    print('saving %i' % i)
    plt.imsave('%03i.png' % i, Z)

# Load the images
for i in range(num_images):
    t = time.time()

    im = Image.open('%03i.png' % i)
    w, h = im.size
    # Crop the central 100x100 pixels
    imc = im.crop((w//2 - 50, h//2 - 50, w//2 + 50, h//2 + 50))

    print('Time to open: %.4f seconds' % (time.time() - t))

    # Convert them to numpy arrays
    data = np.array(imc)

Aud*_*ics 7

While you can't get much faster than PIL's crop in a single thread, you can use multiple cores to speed everything up! :)

I ran the code below on my 8-core i7 machine as well as on my seven-year-old, two-core, 2 GHz laptop. Both saw significant improvements in run time. As you would expect, the improvement depends on the number of cores available.

The core of your code stays the same; I just separated the loop from the actual computation so that the function can be applied to a list of values in parallel.

So this:

for i in range(num_images):
    t = time.time()

    im = Image.open('%03i.png' % i)
    w, h = im.size
    imc = im.crop((w//2 - 50, h//2 - 50, w//2 + 50, h//2 + 50))

    print('Time to open: %.4f seconds' % (time.time() - t))

    # Convert them to numpy arrays
    data = np.array(imc)

becomes:

def convert(filename):
    im = Image.open(filename)
    w, h = im.size
    imc = im.crop((w//2 - 50, h//2 - 50, w//2 + 50, h//2 + 50))
    return numpy.array(imc)

The key to the speedup is the Pool feature of the multiprocessing library. It makes it trivial to run things across multiple processors.

The full code:

import os
import time
import numpy
from PIL import Image
from multiprocessing import Pool

# Path to where my test images are stored
img_folder = os.path.join(os.getcwd(), 'test_images')

# Collects all of the filenames for the images
# I want to process
images = [os.path.join(img_folder, f)
          for f in os.listdir(img_folder)
          if f.endswith('.jpeg')]

# Your code, but wrapped up in a function
def convert(filename):
    im = Image.open(filename)
    w, h = im.size
    imc = im.crop((w//2 - 50, h//2 - 50, w//2 + 50, h//2 + 50))
    return numpy.array(imc)

def main():
    # This is the hero of the code. It creates a pool of
    # worker processes across which you can "map" a function
    pool = Pool()

    t = time.time()
    # We run it normally (single core) first; list() forces the
    # lazy map to actually do the work so the timing is fair
    np_arrays = list(map(convert, images))
    print('Time to open %i images in single thread: %.4f seconds' % (len(images), time.time() - t))

    t = time.time()
    # Now we run the same thing, but this time leveraging the worker pool.
    np_arrays = pool.map(convert, images)
    print('Time to open %i images with multiple threads: %.4f seconds' % (len(images), time.time() - t))

if __name__ == '__main__':
    main()

Pretty basic. It only takes a few extra lines of code, and a little refactoring to move the conversion bit into its own function. The results speak for themselves:

Results:

8-core i7

Time to open 858 images in single thread: 6.0040 seconds
Time to open 858 images with multiple threads: 1.4800 seconds

2-core Intel Duo

Time to open 858 images in single thread: 8.7640 seconds
Time to open 858 images with multiple threads: 4.6440 seconds

So there you go! Even if you have a super-old two-core machine, you can halve the time it takes to open and process your images.

Caveats

Memory. If you're processing thousands of images, at some point you're likely to hit Python's memory limit. To get around this, just process the data in chunks. You still get all of the multiprocessing benefit, just in smaller bites. Something like:

for i in range(0, len(images), chunk_size): 
    results = pool.map(convert, images[i : i+chunk_size]) 
    # rest of code. 


Cla*_*diu 6

Save your files as uncompressed 24-bit BMPs. They store pixel data in a very regular way. Check out the "Image Data" portion of this diagram from Wikipedia. Note that most of the complexity in the diagram is just from the headers:

BMP file format

For example, let's say you are storing this image (shown here zoomed in):

2x2 square image

This is what the pixel data section looks like if the image is stored as a 24-bit uncompressed BMP. Note that the data is stored bottom-up, for whatever reason, and in BGR form instead of RGB, so the first line in the file is the bottom-most line of the image, the second line is the second-bottom-most, and so on:

00 00 FF    FF FF FF    00 00
FF 00 00    00 FF 00    00 00

That data is interpreted as follows:

           |  First column  |  Second Column  |  Padding
-----------+----------------+-----------------+-----------
Second Row |  00 00 FF      |  FF FF FF       |  00 00
-----------+----------------+-----------------+-----------
First Row  |  FF 00 00      |  00 FF 00       |  00 00
-----------+----------------+-----------------+-----------

Or:

           |  First column  |  Second Column  |  Padding
-----------+----------------+-----------------+-----------
Second Row |  red           |  white          |  00 00
-----------+----------------+-----------------+-----------
First Row  |  blue          |  green          |  00 00
-----------+----------------+-----------------+-----------

The padding is there to pad the row size up to a multiple of 4 bytes.
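You can verify this layout by writing the 2x2 example image as a 24-bit BMP with PIL and inspecting the raw bytes; the offset of the pixel array is stored in bytes 10-13 of the file header (a sketch, assuming Pillow is installed):

```python
import io
import struct
from PIL import Image

# Top row: blue, green; bottom row: red, white (as in the example above)
img = Image.new('RGB', (2, 2))
img.putpixel((0, 0), (0, 0, 255))      # blue
img.putpixel((1, 0), (0, 255, 0))      # green
img.putpixel((0, 1), (255, 0, 0))      # red
img.putpixel((1, 1), (255, 255, 255))  # white

buf = io.BytesIO()
img.save(buf, format='BMP')
data = buf.getvalue()

# Bytes 10-13 of the header hold the offset of the pixel array
pixel_array_offset, = struct.unpack_from('<I', data, 10)

# The bottom row comes first in the file, in BGR order: red, white
bottom_row = data[pixel_array_offset:pixel_array_offset + 6]
# Each row is padded to 8 bytes, so the top row starts 8 bytes in
top_row = data[pixel_array_offset + 8:pixel_array_offset + 14]
print(bottom_row.hex(), top_row.hex())
```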


So, all you have to do is implement a reader for this particular file format, and then calculate the byte offsets at which you have to start and stop reading each row:

def calc_bytes_per_row(width, bytes_per_pixel):
    res = width * bytes_per_pixel
    if res % 4 != 0:
        res += 4 - res % 4
    return res

def calc_row_offsets(pixel_array_offset, bmp_width, bmp_height, x, y, row_width):
    if x + row_width > bmp_width:
        raise ValueError("This is only for calculating offsets within a row")

    bytes_per_row = calc_bytes_per_row(bmp_width, 3)
    whole_row_offset = pixel_array_offset + bytes_per_row * (bmp_height - y - 1)
    start_row_offset = whole_row_offset + x * 3
    end_row_offset = start_row_offset + row_width * 3
    return (start_row_offset, end_row_offset)
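As a quick sanity check of the padding arithmetic, plug in the 2x2 example from above (6 bytes of pixel data per row, padded to 8) and the 10000-pixel-wide bitmap from the next example (the function is repeated here so the snippet runs on its own):

```python
def calc_bytes_per_row(width, bytes_per_pixel):
    res = width * bytes_per_pixel
    if res % 4 != 0:
        res += 4 - res % 4
    return res

print(calc_bytes_per_row(2, 3))      # -> 8: 2*3 = 6 bytes, padded up to 8
print(calc_bytes_per_row(10000, 3))  # -> 30000: already a multiple of 4
```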

Then you just have to read the right byte offsets. For example, say you want to read a 400x400 chunk starting at position 500x500 in a 10000x10000 bitmap:

def process_row_bytes(row_bytes):
    ... some efficient way to process the bytes ...

bmpf = open(..., "rb")
pixel_array_offset = ... extract from bmp header ...
bmp_width = 10000
bmp_height = 10000
start_x = 500
start_y = 500
end_x = 500 + 400
end_y = 500 + 400

for cur_y in range(start_y, end_y):
    start, end = calc_row_offsets(pixel_array_offset, 
                                  bmp_width, bmp_height, 
                                  start_x, cur_y, 
                                  end_x - start_x)
    bmpf.seek(start)
    cur_row_bytes = bmpf.read(end - start)
    process_row_bytes(cur_row_bytes)

Note that how you process the bytes matters. You might be able to do something clever with PIL, just dumping the pixel data into it, but I'm not entirely sure. If it's done inefficiently, it might not be worth it. If speed is a huge concern, you could consider writing the reader with pyrex, or implementing the above in C and calling it from Python.
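One possible way to fill in process_row_bytes (an illustration, not necessarily the fastest option) is to collect the raw rows and assemble them into a numpy array, flipping BGR to RGB at the end:

```python
import numpy as np

def rows_to_array(row_bytes_list, row_width):
    # Each entry is one row of raw 24-bit BGR pixel data; calc_row_offsets
    # above never includes the padding, so the rows can be joined directly
    flat = b''.join(row_bytes_list)
    arr = np.frombuffer(flat, dtype=np.uint8)
    arr = arr.reshape(len(row_bytes_list), row_width, 3)
    return arr[:, :, ::-1]  # BGR -> RGB

# Toy check with the rows of the 2x2 BMP example, in the order the read
# loop above produces them (top image row first): blue/green, red/white
rows = [bytes.fromhex('ff000000ff00'), bytes.fromhex('0000ffffffff')]
rgb = rows_to_array(rows, 2)
print(rgb[0, 0], rgb[1, 1])
```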