How to scale/resize a CVPixelBufferRef in Objective-C, iOS

Jam*_*mes 9 iphone image objective-c ios

I am trying to resize an image from a CVPixelBufferRef to 299x299. Ideally it would also crop the image. The original pixel buffer is 640x320; the goal is to scale/crop to 299x299 without losing the aspect ratio (cropping to the center).

I have found code for resizing a UIImage in Objective-C, but nothing for resizing a CVPixelBufferRef. I have found various very complicated Objective-C examples for many different image types, but none specifically for resizing a CVPixelBufferRef.

What is the simplest/best way to do this? Please include exact code.

... I tried selton's answer, but it did not work because the resulting type of the scaled buffer is incorrect (it trips this assertion code):

OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
  int doReverseChannels;
  if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
    doReverseChannels = 1;
  } else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
    doReverseChannels = 0;
  } else {
    assert(false);  // Unknown source format
  }

all*_*enh 9

Using CoreMLHelpers as inspiration, we can create a C function that does what you need. Based on your pixel format requirements, I think this solution will be the most efficient option. I tested it with an AVCaptureVideoDataOutput.

I hope this helps!
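One setup note: the Accelerate-based function in Method 1 below only handles 32-bit-per-pixel formats, so the capture output should be asked for BGRA frames up front (by default it delivers biplanar YUV). A minimal sketch, assuming a videoDataOutput that you attach to your capture session:

// Request 32BGRA frames so the vImage path below can consume them directly
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
videoDataOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};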

The AVCaptureVideoDataOutputSampleBufferDelegate implementation. Most of the work here is creating the centered cropping rectangle; leveraging AVMakeRectWithAspectRatioInsideRect is the key (it does exactly what you want).

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }

    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);

    CGRect videoRect = CGRectMake(0, 0, width, height);
    CGSize scaledSize = CGSizeMake(299, 299);

    // Create a rectangle that meets the output size's aspect ratio, centered in the original video frame
    CGRect centerCroppingRect = AVMakeRectWithAspectRatioInsideRect(scaledSize, videoRect);
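    // e.g. with the question's 640x320 frame and the 299x299 (1:1) target,
    // this yields the centered 320x320 square (160, 0, 320, 320)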

    CVPixelBufferRef croppedAndScaled = createCroppedPixelBuffer(pixelBuffer, centerCroppingRect, scaledSize);

    // Do other things here
    // For example
    CIImage *image = [CIImage imageWithCVImageBuffer:croppedAndScaled];
    // End example

    CVPixelBufferRelease(croppedAndScaled);
}

Method 1: Data manipulation and Accelerate

The basic premise of this function is that it first crops to the specified rectangle and then scales to the final desired size. The crop is achieved simply by ignoring the data outside the rectangle; the scaling is done with Accelerate's vImageScale_ARGB8888 function. Thanks again to CoreMLHelpers for the insight.

void assertCropAndScaleValid(CVPixelBufferRef pixelBuffer, CGRect cropRect, CGSize scaleSize) {
    CGFloat originalWidth = (CGFloat)CVPixelBufferGetWidth(pixelBuffer);
    CGFloat originalHeight = (CGFloat)CVPixelBufferGetHeight(pixelBuffer);

    assert(CGRectContainsRect(CGRectMake(0, 0, originalWidth, originalHeight), cropRect));
    assert(scaleSize.width > 0 && scaleSize.height > 0);
}

void pixelBufferReleaseCallBack(void *releaseRefCon, const void *baseAddress) {
    if (baseAddress != NULL) {
        free((void *)baseAddress);
    }
}

// Returns a CVPixelBufferRef with +1 retain count
CVPixelBufferRef createCroppedPixelBuffer(CVPixelBufferRef sourcePixelBuffer, CGRect croppingRect, CGSize scaledSize) {

    OSType inputPixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    assert(inputPixelFormat == kCVPixelFormatType_32BGRA
           || inputPixelFormat == kCVPixelFormatType_32ABGR
           || inputPixelFormat == kCVPixelFormatType_32ARGB
           || inputPixelFormat == kCVPixelFormatType_32RGBA);

    assertCropAndScaleValid(sourcePixelBuffer, croppingRect, scaledSize);

    if (CVPixelBufferLockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
        NSLog(@"Could not lock base address");
        return nil;
    }

    void *sourceData = CVPixelBufferGetBaseAddress(sourcePixelBuffer);
    if (sourceData == NULL) {
        NSLog(@"Error: could not get pixel buffer base address");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    size_t sourceBytesPerRow = CVPixelBufferGetBytesPerRow(sourcePixelBuffer);
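    // Byte offset of the crop's top-left pixel: the full rows above the crop,
    // plus 4 bytes per pixel to its left within the first cropped row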
    size_t offset = CGRectGetMinY(croppingRect) * sourceBytesPerRow + CGRectGetMinX(croppingRect) * 4;

    vImage_Buffer croppedvImageBuffer = {
        .data = ((char *)sourceData) + offset,
        .height = (vImagePixelCount)CGRectGetHeight(croppingRect),
        .width = (vImagePixelCount)CGRectGetWidth(croppingRect),
        .rowBytes = sourceBytesPerRow
    };

    size_t scaledBytesPerRow = scaledSize.width * 4;
    void *scaledData = malloc(scaledSize.height * scaledBytesPerRow);
    if (scaledData == NULL) {
        NSLog(@"Error: out of memory");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    vImage_Buffer scaledvImageBuffer = {
        .data = scaledData,
        .height = (vImagePixelCount)scaledSize.height,
        .width = (vImagePixelCount)scaledSize.width,
        .rowBytes = scaledBytesPerRow
    };

    /* The ARGB8888, ARGB16U, ARGB16S and ARGBFFFF functions work equally well on
     * other channel orderings of 4-channel images, such as RGBA or BGRA.*/
    vImage_Error error = vImageScale_ARGB8888(&croppedvImageBuffer, &scaledvImageBuffer, nil, 0);
    CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);

    if (error != kvImageNoError) {
        NSLog(@"Error: %ld", error);
        free(scaledData);
        return nil;
    }

    OSType pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    CVPixelBufferRef outputPixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(nil, scaledSize.width, scaledSize.height, pixelFormat, scaledData, scaledBytesPerRow, pixelBufferReleaseCallBack, nil, nil, &outputPixelBuffer);

    if (status != kCVReturnSuccess) {
        NSLog(@"Error: could not create new pixel buffer");
        free(scaledData);
        return nil;
    }

    return outputPixelBuffer;
}

Method 2: CoreImage

This method is much easier to read and has the advantage of being fairly agnostic to the format of the pixel buffer you pass in, which is a plus for certain use cases. Granted, you are limited to the formats CoreImage supports.

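// Returns a CVPixelBufferRef with +1 retain count; the caller must release it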
CVPixelBufferRef createCroppedPixelBufferCoreImage(CVPixelBufferRef pixelBuffer,
                                                   CGRect cropRect,
                                                   CGSize scaleSize,
                                                   CIContext *context) {

    assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize);

    CIImage *image = [CIImage imageWithCVImageBuffer:pixelBuffer];
    image = [image imageByCroppingToRect:cropRect];

    CGFloat scaleX = scaleSize.width / CGRectGetWidth(image.extent);
    CGFloat scaleY = scaleSize.height / CGRectGetHeight(image.extent);

    image = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

    // Due to the way -[CIContext render:toCVPixelBuffer:] works, we need to translate the image so the cropped section is at the origin
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-image.extent.origin.x, -image.extent.origin.y)];

    CVPixelBufferRef output = NULL;

    CVPixelBufferCreate(nil,
                        CGRectGetWidth(image.extent),
                        CGRectGetHeight(image.extent),
                        CVPixelBufferGetPixelFormatType(pixelBuffer),
                        nil,
                        &output);

    if (output != NULL) {
        [context render:image toCVPixelBuffer:output];
    }

    return output;
}

Creating the CIContext can be done at the call site, or the context can be created once and stored in a property. Since creating a CIContext is relatively expensive, reusing one across frames is worthwhile. For the available options, see the documentation.

// Create a CIContext using default settings, this will
// typically use the GPU and Metal by default if supported
if (self.context == nil) {
    self.context = [CIContext context];
}


小智 -1

You could consider using CIImage:

CIImage *image = [CIImage imageWithCVPixelBuffer:pxbuffer];
CIImage *scaledImage = [image imageByApplyingTransform:(CGAffineTransformMakeScale(0.1, 0.1))];
CVPixelBufferRef scaledBuf = [scaledImage pixelBuffer];

You should change the scale to fit your target size.
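One caveat with this snippet: the pixelBuffer property only returns a buffer when the CIImage was created directly from one, so after applying a transform it will generally be NULL. In practice the scaled image has to be rendered into a new buffer with a CIContext, as in Method 2 above. A minimal sketch for the question's 299x299 target, reusing the pxbuffer name from the snippet and assuming context is a cached CIContext:

// Compute exact scale factors for a 299x299 target
CGFloat scaleX = 299.0 / CVPixelBufferGetWidth(pxbuffer);
CGFloat scaleY = 299.0 / CVPixelBufferGetHeight(pxbuffer);

CIImage *image = [CIImage imageWithCVPixelBuffer:pxbuffer];
CIImage *scaledImage = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

// [scaledImage pixelBuffer] would be NULL here, so render into a fresh buffer
CVPixelBufferRef scaledBuf = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 299, 299,
                    CVPixelBufferGetPixelFormatType(pxbuffer), NULL, &scaledBuf);
if (scaledBuf != NULL) {
    [context render:scaledImage toCVPixelBuffer:scaledBuf];
}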