I'm using UIImagePickerController in my app. Does anyone have any optimisation tricks for reducing the photo-taking latency? I don't need to store the photos to the library; I just want to capture the pixel data, do my calculations, and then destroy the image object.
Also, is there a way to hide the iris animation while the camera loads?
(Worst case: I could mask the iris while the camera starts up, and mask a frozen frame while saving/calculating.)
Edit: I've already set showsCameraControls = NO;. That hides the iris effect between snaps, but doesn't affect the iris animation when the camera first loads.
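For reference, here's roughly how I'm setting the picker up at the moment; maskView is just a hypothetical plain view I'm considering for covering the start-up animation, not something from a working solution:
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
// hides the iris effect between snaps, but not the start-up animation
picker.showsCameraControls = NO;
// hypothetical overlay that could mask the camera while it starts up
picker.cameraOverlayView = maskView;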
Are you wedded to UIImagePickerController? As of iOS 4, AVFoundation lets you receive a live stream of frames from the camera at any supported video resolution, with no prescribed user interface and hence no iris animation. On an iPhone 4 that gets you frames at up to 720p with low latency; on earlier devices you can get up to 480p.
Session 409 from the WWDC 2010 videos is a good starting point. You'll want to create an AVCaptureSession, attach a suitable AVCaptureDevice via an AVCaptureDeviceInput, add an AVCaptureVideoDataOutput and give it a dispatch queue on which to push the data back to you. You'll end up with a CVImageBufferRef, which directly exposes the raw pixel data.
Edit: Apple's example code seems to have gone missing; I tend to use roughly the following:
#import <AVFoundation/AVFoundation.h>
// in practice these live as instance variables, since the session has to
// stay alive for as long as you want frames delivered
AVCaptureSession *session;
AVCaptureDevice *device;
AVCaptureVideoDataOutput *output;
// create a capture session
session = [[AVCaptureSession alloc] init];
session.sessionPreset = ...frame quality you want...;
// grab the default video device (which will be the back camera on a device
// with two), create an input for it to the capture session
NSError *error = nil;
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
// connect the two
[session addInput:input];
// create an object to route output back here
output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// create a suitable dispatch queue, GCD style, and hook
// self up as the delegate
dispatch_queue_t queue = dispatch_queue_create(NULL, NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// set 32bpp BGRA pixel format
output.videoSettings =
    [NSDictionary dictionaryWithObject:
                      [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                  forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[session startRunning];
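One thing that may be worth checking in a set-up like this is AVCaptureVideoDataOutput's alwaysDiscardsLateVideoFrames property: it defaults to YES, which means frames are dropped rather than queued while your delegate is still busy, which is probably what you want if you only care about the most recent image.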
This will then start delivering CMSampleBuffers to your captureOutput:didOutputSampleBuffer:fromConnection: on the dispatch queue you created, i.e. on a separate thread. Obviously production code would have a lot more sanity and result checking than the above.
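For orientation, here's a rough skeleton of that delegate method, assuming the receiving class declares itself as conforming to AVCaptureVideoDataOutputSampleBufferDelegate; the conversion code further down would live inside it:
// delegate callback; runs on the dispatch queue supplied above,
// not on the main thread
- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection
{
    // ...per-frame processing, as in the snippet below...
}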
The example code below takes an incoming CMSampleBuffer containing a video frame and converts it into a CGImage, then sends that off to the main thread where, in my test code, it's turned into a UIImage and set as the contents of a UIImageView, just to prove the whole thing is working:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// lock the buffer and grab its geometry and base address; keep the lock
// held for as long as we're reading the pixel data directly
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// create a CGImageRef from the BGRA pixel data
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef contextRef =
    CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
        colourSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colourSpace);
// done reading the pixel data directly, so the buffer can be unlocked now
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
[self performSelectorOnMainThread:@selector(postCGImage:) withObject:[NSValue valueWithPointer:imageRef] waitUntilDone:YES];
CGImageRelease(imageRef);
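And since the snippet above just posts a pointer, here's a rough sketch of what the main-thread method could look like in my test arrangement; the view controller side isn't shown above, so treat the imageView property as a stand-in:
// unwraps the CGImageRef from the NSValue and displays it;
// imageView is an assumed UIImageView property on the view controller
- (void)postCGImage:(NSValue *)imageValue
{
    CGImageRef imageRef = (CGImageRef)[imageValue pointerValue];
    // UIImage keeps its own reference to the image, so the caller can safely
    // CGImageRelease its copy once this synchronous call returns
    self.imageView.image = [UIImage imageWithCGImage:imageRef];
}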
For the sake of the example I've lumped together things that I'd normally split between the object that receives the video frames and the view controller; hopefully I haven't made any mistakes.