Tags: iphone, cocoa-touch, objective-c, uikit, ios4
I'm writing an iPhone (iOS 4) app that captures live video from the camera and processes it in real time.
I'd prefer to capture in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format to make processing easier (I need to process the Y channel). How do I display data in this format? I assume I need to somehow convert it into a UIImage and put it into an image view?
Currently I have code that displays kCVPixelFormatType_32BGRA data, but naturally it does not work with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
This is the code I currently use for the conversion; any help or example showing how to do the same for kCVPixelFormatType_420YpCbCr8BiPlanarFullRange would be appreciated. (Criticism of my current approach is also welcome.)
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return image;
}
Answering my own question. This solved the problem I had (grabbing the YUV output, displaying it, and processing it), even though it is not exactly an answer to the question as asked:
To get YUV output from the camera:
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
[videoOut setAlwaysDiscardsLateVideoFrames:YES];
[videoOut setVideoSettings:
    [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
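For context, here is a minimal sketch of wiring that output into a running capture session. The queue label, the error handling, and the assumption that self implements AVCaptureVideoDataOutputSampleBufferDelegate are illustrative additions, not part of the original answer.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input) [session addInput:input];
// Deliver sample buffers on a dedicated serial queue; "videoQueue" is an arbitrary label.
dispatch_queue_t captureQueue = dispatch_queue_create("videoQueue", NULL);
[videoOut setSampleBufferDelegate:self queue:captureQueue];
[session addOutput:videoOut];
[session startRunning];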
To display it as-is, use AVCaptureVideoPreviewLayer; it doesn't require any code (you can see the FindMyiCon sample in the WWDC sample pack, for example). A minimal sketch follows.
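This sketch assumes the session from the setup above and a view controller's self.view as the host:
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];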
To process the YUV Y channel (bi-planar in this case, so it sits in a single contiguous block; you can also use memcpy instead of a loop, as shown in the sketch after this method):
- (void)processPixelBuffer:(CVImageBufferRef)pixelBuffer
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    int bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
    int bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
    // Allocate space for the Y channel, reallocating as needed.
    if (bufferWidth != y_channel.width || bufferHeight != y_channel.height)
    {
        if (y_channel.data) free(y_channel.data);
        y_channel.width = bufferWidth;
        y_channel.height = bufferHeight;
        y_channel.data = malloc(y_channel.width * y_channel.height);
    }
    // Plane 0 of a bi-planar buffer is the Y channel.
    uint8_t *yc = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    int total = bufferWidth * bufferHeight;
    for (int k = 0; k < total; k++)
    {
        y_channel.data[k] = yc[k]; // copy y channel
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
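A sketch of the memcpy alternative mentioned above, as a drop-in replacement for the copy loop inside processPixelBuffer. Copying row by row also handles any padding the plane may carry, since CVPixelBufferGetBytesPerRowOfPlane can be larger than the width:
size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
uint8_t *src = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int row = 0; row < bufferHeight; row++)
{
    // Copy one row of Y samples, skipping any per-row padding in the source.
    memcpy(y_channel.data + row * bufferWidth, src + row * yBytesPerRow, bufferWidth);
}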
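Finally, to get a displayable UIImage out of the captured Y channel (the conversion the original question asked about), the plane can be drawn through a grayscale bitmap context. A sketch, assuming y_channel has been filled as above; the method name is illustrative:
- (UIImage *)grayscaleImageFromYChannel
{
    // Wrap the raw 8-bit luma samples in a grayscale bitmap context;
    // bytes-per-row is y_channel.width because the copy above is unpadded.
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(y_channel.data, y_channel.width, y_channel.height,
                                             8, y_channel.width, gray, kCGImageAlphaNone);
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    return image;
}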