Asked by Ale*_*one (19 votes) · Tags: iphone, objective-c, avfoundation, uiview
This seems like a simple task, but it is driving me crazy. Is it possible to turn a UIView containing an AVCaptureVideoPreviewLayer as a sublayer into an image to be saved? I want to create an augmented reality overlay and have a button that saves the picture to the camera roll. Holding the power button + home key captures a screenshot to the camera roll, which means all of my capture logic is working and the task is possible. But I cannot seem to make it work programmatically.

I am capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image fail:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
// start the session, etc...

// this saves a white screen
- (IBAction)saveOverlay:(id)sender {
    NSLog(@"saveOverlay");
    // (an earlier attempt used appDelegate.window.bounds.size here instead)
    UIGraphicsBeginImageContext(scrollView.frame.size);
    [previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
    // [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
// this renders everything EXCEPT the preview layer, which comes out blank
[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
I read somewhere that this may be due to security restrictions on the iPhone. Is that true?

Just to be clear: I don't want to save the image from the camera. I want to save the transparent preview layer superimposed over another image, preserving the transparency. Yet for some reason I cannot make it work.
Answered by Jas*_*ues (17 votes)
I like @Roma's suggestion of using GPUImage - great idea... but if you want a pure CocoaTouch approach, here is what to do:

Implement AVCaptureVideoDataOutputSampleBufferDelegate:
// Create a UIImage (with orientation applied) from the sample buffer data
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (_captureFrame)
    {
        [captureSession stopRunning];
        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        // rotate: is a custom UIImage category (see the sketch further below)
        image = [image rotate:UIImageOrientationRight];
        _frameCaptured = YES;
        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}
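For that callback to fire at all, the session needs an AVCaptureVideoDataOutput with its sample buffer delegate pointed at this class. A rough setup sketch (the queue label and the BGRA pixel format are my assumptions, the latter chosen to match imageFromSampleBuffer below):

// Wire a video data output into the existing capture session
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
// Deliver frames on a background queue (label is an arbitrary choice)
dispatch_queue_t queue = dispatch_queue_create("camera.frame.queue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];
if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}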
Capture like so:
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    // (expects BGRA pixels, i.e. kCVPixelFormatType_32BGRA output)
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
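Note that rotate: in the delegate method above is not a UIKit method; it refers to a custom UIImage category. A minimal sketch of such a category (assuming that re-tagging the orientation is enough, since UIKit applies it when the image is drawn or displayed):

@interface UIImage (Rotation)
- (UIImage *)rotate:(UIImageOrientation)orientation;
@end

@implementation UIImage (Rotation)
- (UIImage *)rotate:(UIImageOrientation)orientation
{
    // Re-wrap the underlying CGImage with the requested orientation tag
    return [UIImage imageWithCGImage:self.CGImage
                               scale:self.scale
                         orientation:orientation];
}
@end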
Blend the UIImage with the overlay (a blending sketch follows the snippet below).

Capture the new UIView:
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
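Putting the two steps together: one way to do the blend (a minimal sketch, assuming these helpers live in the same ImageTools class as imageWithView: above, and that overlayView is your transparent AR overlay) is to draw the captured frame and an overlay snapshot into a single bitmap:

// Sketch: flatten the captured camera frame and the transparent overlay
// into one image; overlayView stands in for your own AR overlay view
+ (UIImage *)blendImage:(UIImage *)cameraImage withOverlay:(UIView *)overlayView
{
    CGSize size = cameraImage.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, cameraImage.scale);

    // Camera frame as the background
    [cameraImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

    // Snapshot the overlay and composite it on top, keeping transparency
    UIImage *overlayImage = [self imageWithView:overlayView];
    [overlayImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

    UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return blended;
}

The result can then be handed to UIImageWriteToSavedPhotosAlbum, exactly as in the question's saveOverlay: action.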
Answered by Roma

I can suggest that you try GPUImage:

https://github.com/BradLarson/GPUImage

It uses OpenGL, so it is quite fast. It can process pictures from the camera and apply filters to them (there are a lot of them), including edge detection, motion detection, and far more.

It is similar to OpenCV, but in my own experience GPUImage is easier to hook up to your project, and the language is Objective-C.

One problem could appear if you decide to use box2d for physics - it also uses OpenGL, and you will need to spend some time until these two frameworks stop fighting :)
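For reference, a minimal GPUImage camera pipeline looks roughly like this (a sketch adapted from the project's README; the session preset, the choice of Sobel edge detection, and the view cast are my assumptions):

#import "GPUImage.h"

// Feed the rear camera through an edge-detection filter into a live view
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSobelEdgeDetectionFilter *filter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
[videoCamera addTarget:filter];

// GPUImageView renders the filtered frames on screen
GPUImageView *filteredVideoView = (GPUImageView *)self.view;
[filter addTarget:filteredVideoView];

[videoCamera startCameraCapture];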