I'm having a problem with CGBitmapContextCreateImage in my iPhone app. I'm grabbing camera frames with the AV Foundation framework using this method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    self.imageView.image = image;
    CGImageRelease(newImage);
}
However, I see an error in the debug console:
<Error>: CGDataProviderCreateWithCopyOfData: vm_copy failed: status 2.
Has anyone seen this before? By commenting out code I've narrowed the problem down to a single line:
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
But I don't know how to get rid of it. Functionally it works fine; the CGImage is clearly being created. I just need to know what is causing the error so it doesn't affect other parts. …
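For context: CGBitmapContextCreateImage first attempts a cheap copy-on-write duplication of the context's backing store via vm_copy, and when that store is the camera's pixel buffer (which is not allocated in a way vm_copy can remap), the attempt fails and Core Graphics falls back to an ordinary data copy, logging this message along the way. One commonly suggested workaround is to give the bitmap context memory you allocated yourself instead of the CVPixelBuffer's base address. A minimal sketch of that row-by-row copy, in plain C (`copy_tightly_packed` is an illustrative helper name, not an Apple API):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Copy a padded BGRA buffer (bytesPerRow >= width * 4, as CVPixelBuffer
 * rows usually are) into a tightly packed buffer that we own. Handing
 * CGBitmapContextCreate memory allocated with malloc, instead of the
 * camera's base address, avoids aliasing the capture buffer.
 * The caller frees the returned pointer. */
static uint8_t *copy_tightly_packed(const uint8_t *src,
                                    size_t width, size_t height,
                                    size_t bytesPerRow) {
    size_t tightRow = width * 4;            /* 4 bytes per BGRA pixel */
    uint8_t *dst = malloc(tightRow * height);
    if (!dst) return NULL;
    for (size_t y = 0; y < height; y++)     /* drop the per-row padding */
        memcpy(dst + y * tightRow, src + y * bytesPerRow, tightRow);
    return dst;
}
```

Note that the copy must honor bytesPerRow: CVPixelBuffer rows are frequently padded past width * 4 for alignment, so a single flat memcpy of width * height * 4 bytes would read the wrong pixels.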
I have an app that captures live video in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format in order to process the Y channel. According to Apple's documentation:
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma=[0,255] chroma=[1,255]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct.
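In memory, a buffer in this bi-planar format is a full-resolution Y plane followed by an interleaved, half-resolution CbCr plane. A small C sketch of how one would index such a buffer, assuming a contiguous allocation with no row padding (`nv12_sample` is an illustrative name, not an Apple API):

```c
#include <stdint.h>
#include <stddef.h>

/* Fetch the (Y, Cb, Cr) triple for pixel (x, y) from a contiguous
 * bi-planar 4:2:0 buffer: a width*height Y plane followed by an
 * interleaved CbCr plane at half resolution in both dimensions. */
static void nv12_sample(const uint8_t *buf, size_t width, size_t height,
                        size_t x, size_t y,
                        uint8_t *outY, uint8_t *outCb, uint8_t *outCr) {
    const uint8_t *cbcr = buf + width * height;  /* CbCr plane starts after Y */
    *outY  = buf[y * width + x];
    *outCb = cbcr[(y / 2) * width + (x / 2) * 2];      /* Cb, Cr interleaved */
    *outCr = cbcr[(y / 2) * width + (x / 2) * 2 + 1];
}
```

In a real CVPixelBuffer you would use CVPixelBufferGetBaseAddressOfPlane and CVPixelBufferGetBytesPerRowOfPlane per plane rather than assuming contiguity, since each plane can carry its own row padding.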
I want to display some of these frames in a UIViewController. Is there an API to convert them to the kCVPixelFormatType_32BGRA format? Can you give some hints on adapting this method provided by Apple?
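As for the conversion itself: per pixel, a full-range Y'CbCr to BGRA pass boils down to the BT.601 matrix. A fixed-point sketch of that math in plain C (coefficients scaled by 256; an illustration of the arithmetic, not a drop-in replacement):

```c
#include <stdint.h>

static uint8_t clamp_u8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Convert one full-range Y'CbCr sample (BT.601) to a packed BGRA pixel,
 * the per-pixel math behind a 420f -> kCVPixelFormatType_32BGRA pass.
 * Integer approximation with coefficients scaled by 256. */
static void ycbcr_to_bgra(uint8_t y, uint8_t cb, uint8_t cr, uint8_t out[4]) {
    int c = y, d = cb - 128, e = cr - 128;
    out[0] = clamp_u8(c + ((454 * d) >> 8));           /* B = Y + 1.772 Cb' */
    out[1] = clamp_u8(c - ((88 * d + 183 * e) >> 8));  /* G = Y - 0.344 Cb' - 0.714 Cr' */
    out[2] = clamp_u8(c + ((359 * e) >> 8));           /* R = Y + 1.402 Cr' */
    out[3] = 255;                                      /* A */
}
```

In practice a per-pixel loop like this is slow; if I recall correctly, the Accelerate framework's vImage library ships converters for exactly this case (e.g. vImageConvert_420Yp8_CbCr8ToARGB8888), which is the route worth checking first.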
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); …

I'm trying to convert a YUV image to a CIImage and ultimately a UIImage. I'm fairly new to this and trying to find a simple way to do it. From what I have learned, since iOS 6 YUV can be used directly to create a CIImage, but when I try to create one the CIImage only holds a nil value. My code is this:
NSLog(@"Started DrawVideoFrame\n");

CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithBytes(
    kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
    lpData, bytesPerRow, NULL, NULL, NULL, &pixelBuffer);

if (ret != kCVReturnSuccess)
{
    NSLog(@"CVPixelBufferCreateWithBytes failed");
    CVPixelBufferRelease(pixelBuffer);
}

NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                           @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(@"CURRENT CIImage -> %p\n", cimage);

UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(@"CURRENT UIImage -> %p\n", image);
Here lpData is the YUV data, an array of unsigned chars.
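One likely culprit: CVPixelBufferCreateWithBytes takes a single base address and, as far as I know, does not support planar formats, which is consistent with the resulting CIImage being nil here; CVPixelBufferCreateWithPlanarBytes, which takes a per-plane description, is usually needed for 420f data. The bookkeeping it expects can be sketched in plain C, assuming the contiguous, unpadded layout described above (names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Per-plane description for a bi-planar 4:2:0 buffer held in one
 * contiguous allocation: the base address, dimensions, and row stride
 * that CVPixelBufferCreateWithPlanarBytes would need for each plane. */
typedef struct {
    const uint8_t *base;
    size_t width, height, bytesPerRow;
} PlaneDesc;

static void nv12_planes(const uint8_t *lpData, size_t width, size_t height,
                        PlaneDesc *yPlane, PlaneDesc *cbcrPlane) {
    yPlane->base        = lpData;
    yPlane->width       = width;
    yPlane->height      = height;
    yPlane->bytesPerRow = width;                /* assuming no row padding */

    cbcrPlane->base        = lpData + width * height; /* CbCr follows Y */
    cbcrPlane->width       = width / 2;         /* half-resolution chroma */
    cbcrPlane->height      = height / 2;
    cbcrPlane->bytesPerRow = width;             /* 2 bytes per CbCr pair */
}
```

The total buffer size under this layout is width * height * 3 / 2 bytes; if lpData was produced with any row padding, the bytesPerRow fields would have to reflect that instead.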
This also looks interesting: …