Cropping a UIImage to its alpha

Mik*_*keQ 9 core-graphics cgimage ipad ios4 ios

I have a fairly large, almost full-screen image that I'll be displaying on an iPad. The image is about 80% transparent. I need to determine, on the device, the bounding box of the non-transparent pixels, then crop the image to that bounding box.

Scanning other questions here on Stack Overflow and reading some of the CoreGraphics docs, I think I could accomplish this with:

CGBitmapContextCreate(...)   // use this to render the image to a byte array
// ... iterate through this byte array to find the bounding box ...
CGImageCreateWithImageInRect(image, boundingRect);

That seems very inefficient and clumsy. Is there something clever I can do with CGImage masks, or some way to leverage the device's graphics acceleration, to do this?

Ste*_*AIS 13

Thanks to user404709 for doing all the hard work. The code below also handles Retina images and releases the CFDataRef.

- (UIImage *)trimmedImage {

    CGImageRef inImage = self.CGImage;
    CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);

    size_t width = CGImageGetWidth(inImage);
    size_t height = CGImageGetHeight(inImage);
    // Rows may be padded, so don't assume bytesPerRow == width * 4.
    size_t bytesPerRow = CGImageGetBytesPerRow(inImage);

    // These stay at zero if the image turns out to be fully transparent.
    CGPoint top = CGPointZero, left = CGPointZero, right = CGPointZero, bottom = CGPointZero;

    // Scan columns left to right for the first non-transparent pixel.
    BOOL breakOut = NO;
    for (int x = 0; breakOut == NO && x < width; x++) {
        for (int y = 0; y < height; y++) {
            // Assumes 4 bytes per pixel with alpha in the last byte (e.g. RGBA).
            size_t loc = y * bytesPerRow + x * 4;
            if (m_PixelBuf[loc + 3] != 0) {
                left = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }

    // Scan rows top to bottom.
    breakOut = NO;
    for (int y = 0; breakOut == NO && y < height; y++) {
        for (int x = 0; x < width; x++) {
            size_t loc = y * bytesPerRow + x * 4;
            if (m_PixelBuf[loc + 3] != 0) {
                top = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }

    // Scan rows bottom to top.
    breakOut = NO;
    for (int y = (int)height - 1; breakOut == NO && y >= 0; y--) {
        for (int x = (int)width - 1; x >= 0; x--) {
            size_t loc = y * bytesPerRow + x * 4;
            if (m_PixelBuf[loc + 3] != 0) {
                bottom = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }

    // Scan columns right to left.
    breakOut = NO;
    for (int x = (int)width - 1; breakOut == NO && x >= 0; x--) {
        for (int y = (int)height - 1; y >= 0; y--) {
            size_t loc = y * bytesPerRow + x * 4;
            if (m_PixelBuf[loc + 3] != 0) {
                right = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }

    CGFloat scale = self.scale;

    // +1 so a bounding box one pixel wide or tall does not collapse to zero size;
    // dividing by scale converts pixel coordinates back to points.
    CGRect cropRect = CGRectMake(left.x / scale,
                                 top.y / scale,
                                 (right.x - left.x + 1) / scale,
                                 (bottom.y - top.y + 1) / scale);
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, scale);
    [self drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
            blendMode:kCGBlendModeCopy
                alpha:1.0];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CFRelease(m_DataRef);
    return croppedImage;
}


Mat*_*een 8

I created a category on UIImage that does this, in case anyone needs it...

+ (UIImage *)cropTransparencyFromImage:(UIImage *)img {

    CGImageRef inImage = img.CGImage;           
    CFDataRef m_DataRef;  
    m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));  
    UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);  

    // Note: these are point dimensions, so this assumes a non-Retina image (scale == 1).
    int width = img.size.width;
    int height = img.size.height;

    CGPoint top,left,right,bottom;

    BOOL breakOut = NO;
    for (int x = 0;breakOut==NO && x < width; x++) {
        for (int y = 0; y < height; y++) {
            // Assumes RGBA with bytesPerRow == width * 4.
            int loc = (x + (y * width)) * 4;
            if (m_PixelBuf[loc + 3] != 0) {
                left = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }

    breakOut = NO;
    for (int y = 0;breakOut==NO && y < height; y++) {

        for (int x = 0; x < width; x++) {

            int loc = x + (y * width);
            loc *= 4;
            if (m_PixelBuf[loc + 3] != 0) {
                top = CGPointMake(x, y);
                breakOut = YES;
                break;
            }

        }
    }

    breakOut = NO;
    for (int y = height-1;breakOut==NO && y >= 0; y--) {

        for (int x = width-1; x >= 0; x--) {

            int loc = x + (y * width);
            loc *= 4;
            if (m_PixelBuf[loc + 3] != 0) {
                bottom = CGPointMake(x, y);
                breakOut = YES;
                break;
            }

        }
    }

    breakOut = NO;
    for (int x = width-1;breakOut==NO && x >= 0; x--) {

        for (int y = height-1; y >= 0; y--) {

            int loc = x + (y * width);
            loc *= 4;
            if (m_PixelBuf[loc + 3] != 0) {
                right = CGPointMake(x, y);
                breakOut = YES;
                break;
            }

        }
    }


    // +1 so a bounding box one pixel wide or tall does not collapse to zero size.
    CGRect cropRect = CGRectMake(left.x, top.y, right.x - left.x + 1, bottom.y - top.y + 1);

    UIGraphicsBeginImageContextWithOptions( cropRect.size,
                                           NO,
                                           0.);
    [img drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
           blendMode:kCGBlendModeCopy
               alpha:1.];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CFRelease(m_DataRef);   // the copied pixel data must be released to avoid a leak
    return croppedImage;
}


Mr.*_*rna 0

There is no clever way to make the device do this work for you, but there are ways to speed the task up, or to limit its impact on the user interface.

First, consider whether this task needs speeding up at all. A simple iteration over that byte array may well be fast enough. Investing in optimization is probably unnecessary if the app computes the box only once per run, or only in response to a user choice that happens at most every few seconds.
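As a point of reference for that "simple iteration": the whole bounding box can be found in a single pass rather than four separate edge scans. A minimal plain-C sketch over a packed RGBA buffer (the function name and signature are hypothetical, not from either answer above):

```c
#include <stdint.h>
#include <stddef.h>

/* One pass over an RGBA byte buffer, tracking the min/max coordinates of
 * non-transparent pixels. Returns 0 (and leaves the outputs untouched)
 * if every pixel is transparent. */
static int alpha_bounding_box(const uint8_t *pixels,
                              size_t width, size_t height, size_t bytes_per_row,
                              size_t *min_x, size_t *min_y,
                              size_t *max_x, size_t *max_y)
{
    int found = 0;
    for (size_t y = 0; y < height; y++) {
        const uint8_t *row = pixels + y * bytes_per_row;
        for (size_t x = 0; x < width; x++) {
            if (row[x * 4 + 3] == 0)        /* alpha byte of an RGBA pixel */
                continue;
            if (!found) {
                *min_x = *max_x = x;
                *min_y = *max_y = y;
                found = 1;
            } else {
                if (x < *min_x) *min_x = x;
                if (x > *max_x) *max_x = x;
                *max_y = y;                 /* rows are visited top to bottom */
            }
        }
    }
    return found;
}
```

The crop rectangle is then `(min_x, min_y, max_x - min_x + 1, max_y - min_y + 1)` in pixel coordinates.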

If the bounding box is not needed until some time after the image becomes available, the iteration can be launched on a separate thread, so the computation never blocks the main interface thread. Grand Central Dispatch makes spinning this task off onto another thread fairly painless.
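On iOS that would be a `dispatch_async` onto a background queue, calling back to the main queue with the result. As a portable sketch of the same idea — the scan runs on a worker thread while the caller stays responsive — here is the shape with POSIX threads standing in for GCD (all names are illustrative):

```c
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

/* Work item: input buffer plus slots for the result. On iOS these would be
 * values captured by a dispatch_async block instead of a struct. */
typedef struct {
    const uint8_t *pixels;               /* packed RGBA buffer */
    size_t width, height, bytes_per_row;
    long first_row, last_row;            /* -1 if fully transparent */
} row_job;

static int row_has_alpha(const uint8_t *row, size_t width)
{
    for (size_t x = 0; x < width; x++)
        if (row[x * 4 + 3] != 0) return 1;
    return 0;
}

/* Worker: find the first and last rows containing a non-transparent pixel. */
static void *find_rows(void *arg)
{
    row_job *job = arg;
    job->first_row = job->last_row = -1;
    for (size_t y = 0; y < job->height; y++) {
        if (row_has_alpha(job->pixels + y * job->bytes_per_row, job->width)) {
            if (job->first_row < 0) job->first_row = (long)y;
            job->last_row = (long)y;
        }
    }
    return NULL;
}

/* Launch the scan without blocking the calling (UI) thread. */
static void start_row_scan(row_job *job, pthread_t *tid)
{
    pthread_create(tid, NULL, find_rows, job);
}
```

The caller later joins the thread (or, with GCD, receives a completion block) and only then reads the result.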

If the task genuinely has to be fast — say, real-time processing of video frames — then processing the data in parallel may help. The Accelerate framework can help set up SIMD computations on the data. Alternatively, to really squeeze performance out of this iteration, ARM assembly using NEON SIMD operations can deliver excellent results, at the cost of significant development effort.
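Short of dropping into Accelerate or NEON intrinsics, the flavor of such an optimization can be shown portably: combine several alpha bytes per loop step so runs of fully transparent pixels are rejected in bulk. A hedged plain-C sketch, assuming a tightly packed RGBA buffer (a real vector unit would test 16 or more alpha bytes per instruction):

```c
#include <stdint.h>
#include <stddef.h>

/* Returns 1 if any of `count` packed RGBA pixels starting at p has a
 * non-zero alpha. Checks four pixels per loop iteration by OR-ing their
 * alpha bytes -- a scalar stand-in for a 16-byte NEON vector test. */
static int any_opaque(const uint8_t *p, size_t count)
{
    size_t i = 0;
    for (; i + 4 <= count; i += 4) {
        uint8_t acc = p[i * 4 + 3] | p[(i + 1) * 4 + 3]
                    | p[(i + 2) * 4 + 3] | p[(i + 3) * 4 + 3];
        if (acc != 0) return 1;          /* at least one of the four is opaque */
    }
    for (; i < count; i++)               /* scalar tail for the last few pixels */
        if (p[i * 4 + 3] != 0) return 1;
    return 0;
}
```

Applied per row, this lets the edge scans above skip most of an 80%-transparent image with a quarter of the branches.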

最后的选择是研究更好的算法。在检测图像中的特征方面有大量的工作。边缘检测算法可能比通过字节数组的简单迭代更快。也许Apple将来会在Core Graphics中添加边缘检测功能,这可以应用于这种情况。Apple 实现的图像处理功能可能与这种情况不完全匹配,但 Apple 的实现应该优化以使用 iPad 的 SIMD 或 GPU 功能,从而获得更好的整体性能。