Long-time Stack Overflow reader, first-time poster.
I'm trying to create an iPad app called CloudWriter. The concept of the app is drawing the shapes you see in the clouds. After downloading the app and launching CloudWriter, the user is presented with a live video background (from the rear camera) with an OpenGL drawing layer on top of it. The user can open the app, point the iPad at clouds in the sky, and draw what they see on the display.
A major feature of the app is a video screen capture of what happens on the display during the user's session. The live video feed and the "drawing" view become a flattened (merged) video.
Some assumptions and background information about how this currently works.
At this point, the idea is that the user can point the iPad 3 camera at some clouds in the sky and draw the shapes they see. That functionality works flawlessly. I start running into performance problems when I try to make a "flattened" video screen capture of the user's session. The resulting "flattened" video overlays the camera feed with the user's drawing in real time.
A good example of an app with functionality similar to what we're looking for is Board Cam, available in the App Store.
To start the process, a "Record" button is visible in the view at all times. When the user taps the record button, the expectation is that the session will be recorded as a "flattened" video screen capture until the record button is tapped again.
When the user taps the "Record" button, the following happens in the code:
The AVCaptureSessionPreset is changed from AVCaptureSessionPresetMedium to AVCaptureSessionPresetPhoto, which gives access to

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

didOutputSampleBuffer starts receiving data and creates an image from the current video buffer data. It does this by calling
    - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
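Concretely, the switch when "Record" is tapped looks roughly like the sketch below. The captureSession and videoDataOutput property names are my own placeholders; only the preset constants and the delegate callback come from the code above.

    // Hedged sketch of the preset switch and the sample-buffer delegate hookup.
    [self.captureSession beginConfiguration];
    self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto; // was AVCaptureSessionPresetMedium
    [self.captureSession commitConfiguration];

    // Deliver frames to captureOutput:didOutputSampleBuffer:fromConnection: off the main thread.
    dispatch_queue_t sampleQueue = dispatch_queue_create("com.cloudwriter.samplebuffer", DISPATCH_QUEUE_SERIAL);
    [self.videoDataOutput setSampleBufferDelegate:self queue:sampleQueue];
    self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES; // drop late frames instead of queuing them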
The application's root view controller starts overriding drawRect: to create a flattened image, used as an individual frame in the final video.
To create that flattened image, in the root view controller's drawRect: we grab the last frame that the AVCamCaptureManager's didOutputSampleBuffer code received. That is below:
    - (void)drawRect:(CGRect)rect {
        NSDate *start = [NSDate date];
        CGContextRef context = [self createBitmapContextOfSize:self.frame.size];

        // not sure why this is necessary... image renders upside-down and mirrored
        CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
        CGContextConcatCTM(context, flipVertical);

        if (isRecording)
            [[self.layer presentationLayer] renderInContext:context];

        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *background = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);

        UIImage *bottomImage = background;

        if (((AVCamCaptureManager *)self.captureManager).currentImage != nil && isVideoBGActive)
        {
            UIImage *image = [((AVCamCaptureManager *)self.mainContentScreen.captureManager).currentImage retain];
            CGSize newSize = background.size;
            UIGraphicsBeginImageContext(newSize);

            // Use existing opacity as is
            if (isRecording)
            {
                if ([self.mainContentScreen isVideoBGActive] && _recording)
                {
                    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
                }
                // Apply supplied opacity
                [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
            }

            UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            self.currentScreen = newImage;
            [image release];
        }

        if (isRecording) {
            float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
            [self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
        }

        float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
        float delayRemaining = (1.0 / self.frameRate) - processingSeconds;

        CGContextRelease(context);

        // redraw at the specified framerate
        [self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
    }
createBitmapContextOfSize: is as follows:
    - (CGContextRef)createBitmapContextOfSize:(CGSize)size {
        CGContextRef context = NULL;
        CGColorSpaceRef colorSpace = nil;
        int bitmapByteCount;
        int bitmapBytesPerRow;

        bitmapBytesPerRow = (size.width * 4);
        bitmapByteCount = (bitmapBytesPerRow * size.height);

        colorSpace = CGColorSpaceCreateDeviceRGB();

        if (bitmapData != NULL) {
            free(bitmapData);
        }
        bitmapData = malloc(bitmapByteCount);
        if (bitmapData == NULL) {
            fprintf(stderr, "Memory not allocated!");
            CGColorSpaceRelease(colorSpace);
            return NULL;
        }

        context = CGBitmapContextCreate(bitmapData,
                                        size.width,
                                        size.height,
                                        8, // bits per component
                                        bitmapBytesPerRow,
                                        colorSpace,
                                        kCGImageAlphaPremultipliedFirst);
        CGContextSetAllowsAntialiasing(context, NO);
        if (context == NULL) {
            free(bitmapData);
            fprintf(stderr, "Context not created!");
            CGColorSpaceRelease(colorSpace);
            return NULL;
        }

        //CGAffineTransform transform = CGAffineTransformIdentity;
        //transform = CGAffineTransformScale(transform, size.width * .25, size.height * .25);
        //CGAffineTransformScale(transform, 1024, 768);

        CGColorSpaceRelease(colorSpace);
        return context;
    }
captureOutput:didOutputSampleBuffer:fromConnection: is:
    // Delegate routine that is called when a sample buffer was written
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
            fromConnection:(AVCaptureConnection *)connection
    {
        // Create a UIImage from the sample buffer data
        [self imageFromSampleBuffer:sampleBuffer];
    }
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer is below:
    // Create a UIImage from sample buffer data - modified not to return a UIImage *,
    // but to store it in self.currentImage instead
    - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // uint8_t *tmp = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        int bytes = CVPixelBufferGetBytesPerRow(imageBuffer); // bytes per row
        //void *baseAddress = malloc(bytes);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        uint8_t *baseAddress = malloc(bytes * height);
        memcpy(baseAddress, CVPixelBufferGetBaseAddress(imageBuffer), bytes * height);
        size_t width = CVPixelBufferGetWidth(imageBuffer);

        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytes, colorSpace,
                                                     kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
        // CGContextScaleCTM(context, 0.25, 0.25); //scale down to size

        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);

        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        free(baseAddress);

        self.currentImage = [UIImage imageWithCGImage:quartzImage
                                                scale:0.25
                                          orientation:UIImageOrientationUp];

        // Release the Quartz image
        CGImageRelease(quartzImage);

        return nil;
    }
Finally, I write each frame to disk using writeVideoFrameAtTime:CMTimeMake, code below:
    - (void)writeVideoFrameAtTime:(CMTime)time {
        if (![videoWriterInput isReadyForMoreMediaData]) {
            NSLog(@"Not ready for video data");
        }
        else {
            @synchronized (self) {
                UIImage *newFrame = [self.currentScreen retain];
                CVPixelBufferRef pixelBuffer = NULL;
                CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
                CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

                if (image == nil)
                {
                    [newFrame release];
                    CVPixelBufferRelease(pixelBuffer);
                    CGImageRelease(cgImage);
                    return;
                }

                int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, avAdaptor.pixelBufferPool, &pixelBuffer);
                if (status != 0) {
                    // could not get a buffer from the pool
                    NSLog(@"Error creating pixel buffer: status=%d", status);
                }

                // set image data into pixel buffer
                CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
                CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels); // XXX: will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data

                if (status == 0) {
                    BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
                    if (!success)
                        NSLog(@"Warning: Unable to write buffer to video");
                }

                // clean up
                [newFrame release];
                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                CVPixelBufferRelease(pixelBuffer);
                CFRelease(image);
                CGImageRelease(cgImage);
            }
        }
    }
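The videoWriterInput and avAdaptor used above aren't shown being created; a standard AVAssetWriter setup along the lines below is what that code appears to assume. The videoWriter name, output URL, dimensions and pixel format are my own placeholders.

    // Hedged sketch of the writer setup that writeVideoFrameAtTime: relies on.
    NSError *error = nil;
    videoWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:1024], AVVideoWidthKey,
                                   [NSNumber numberWithInt:768], AVVideoHeightKey,
                                   nil];
    videoWriterInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                      outputSettings:videoSettings];
    videoWriterInput.expectsMediaDataInRealTime = YES;

    NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB],
                                      (NSString *)kCVPixelBufferPixelFormatTypeKey,
                                      nil];
    avAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                     initWithAssetWriterInput:videoWriterInput
                  sourcePixelBufferAttributes:bufferAttributes];

    [videoWriter addInput:videoWriterInput];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];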
Once isRecording is set to YES, the iPad 3's performance drops from about 20 FPS to 5 FPS. Using Instruments, I can see that the following block of code (from drawRect:) is what drags performance down to unusable levels.
    if (_recording)
    {
        if ([self.mainContentScreen isVideoBGActive] && _recording)
        {
            [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        }
        // Apply supplied opacity
        [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
    }
My understanding is that, because I'm capturing the full screen, we lose all the benefit that drawInRect: is supposed to give. Specifically, I mean the faster redraws that come, in theory, from only updating a small portion of the display (the CGRect passed in). Again, when capturing the full screen, I'm not sure drawInRect: can provide nearly as much benefit.
To improve performance, I'm thinking that if I were to scale down the image supplied by imageFromSampleBuffer and the current context of the drawing view, I would see an increase in frame rate. Unfortunately, CoreGraphics.framework isn't something I've worked with before, so I don't know whether I'll be able to tune performance to an acceptable level.
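For what it's worth, the kind of downscaling I have in mind is sketched below; the helper name and scale factor are illustrative, not code I have working yet.

    // Hypothetical helper: redraw a captured frame into a smaller context before compositing it.
    - (UIImage *)downscaledImage:(UIImage *)image byFactor:(CGFloat)factor
    {
        CGSize smallSize = CGSizeMake(image.size.width * factor, image.size.height * factor);
        UIGraphicsBeginImageContextWithOptions(smallSize, YES, 1.0); // opaque, 1x scale
        [image drawInRect:CGRectMake(0.0, 0.0, smallSize.width, smallSize.height)];
        UIImage *smaller = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return smaller;
    }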
Do any Core Graphics gurus have input?
Also, ARC is turned off for some of the code, and the analyzer shows one leak, but I believe it's a false positive.
Coming soon, CloudWriter™, where the sky is the limit!
If you want good recording performance, you're going to need to avoid redrawing things using Core Graphics. Stick to pure OpenGL ES.
You say you already do your finger painting in OpenGL ES, so you should be able to render that into a texture. The live video feed can also be directed into a texture. From there, you can do an overlay blend of the two based on the alpha channel of the finger-painting texture.
This is fairly easy to do using OpenGL ES 2.0 shaders. In fact, my GPUImage open source framework can handle the video capture and blending portions of this (see the FilterShowcase sample application for an example of an image overlaid on video) if you supply the rendered texture from your painting code. You'll have to make sure the painting uses OpenGL ES 2.0, not 1.1, and that it has the same share group as the GPUImage OpenGL ES context, but I show how to do that in the CubeExample application.
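For reference, the fragment-shader half of that alpha-based blend can be as small as the sketch below. The uniform and varying names are made up for illustration; GPUImage's own blend filters do essentially the same thing.

    // Sketch of an OpenGL ES 2.0 fragment shader that composites the painting over the
    // video wherever the painting has alpha. Names are illustrative only.
    static NSString *const kPaintOverVideoFragmentShader =
        @"varying highp vec2 textureCoordinate;\n"
        @"uniform sampler2D videoTexture;\n"
        @"uniform sampler2D paintTexture;\n"
        @"void main()\n"
        @"{\n"
        @"    lowp vec4 video = texture2D(videoTexture, textureCoordinate);\n"
        @"    lowp vec4 paint = texture2D(paintTexture, textureCoordinate);\n"
        @"    gl_FragColor = vec4(mix(video.rgb, paint.rgb, paint.a), 1.0);\n"
        @"}";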
I also handle video recording for you in GPUImage in a high-performance way by using the texture caches available on iOS 5.0 and above.
By using something like my framework and staying within OpenGL ES, you should be able to record the blend at a solid 30 FPS for 720p video (iPad 2) or 1080p video (iPad 3).
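Putting those pieces together, the wiring might look roughly like the sketch below. This is written from my recollection of the GPUImage API, so treat the class and method names (GPUImageTextureInput, processTextureWithFrameTime:, and so on) as things to verify against the framework version you use; paintingTexture, previewView and outputURL are placeholders supplied by your own code.

    #import "GPUImage.h"

    // Camera -> alpha blend <- painting texture; the blended result goes to the screen
    // and to a movie writer.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                            cameraPosition:AVCaptureDevicePositionBack];

    // The texture the ES 2.0 painting code renders into (same share group as GPUImage).
    GPUImageTextureInput *paintingInput =
        [[GPUImageTextureInput alloc] initWithTexture:paintingTexture
                                                 size:CGSizeMake(1280.0, 720.0)];

    GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    blendFilter.mix = 1.0;                 // let the painting's alpha fully drive the blend
    [videoCamera addTarget:blendFilter];   // first input: live video
    [paintingInput addTarget:blendFilter]; // second input: finger painting

    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL
                                                 size:CGSizeMake(1280.0, 720.0)];
    [blendFilter addTarget:previewView];   // previewView is a GPUImageView in the view hierarchy
    [blendFilter addTarget:movieWriter];

    [videoCamera startCameraCapture];
    [movieWriter startRecording];

    // After each repaint of the painting texture, push it through the pipeline:
    // [paintingInput processTextureWithFrameTime:frameTime];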