Getting still images from the video output on the iPhone?

Oli*_*ver 2 iphone objective-c

I'm writing an app that displays statistics about the lighting conditions seen by the iPhone camera. I take one image every second and run calculations on it.

To capture the image, I'm using the following method:

-(void) captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in captureManager.stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] )
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [captureManager.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
     {   
         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         latestImage = [[UIImage alloc] initWithData:imageData];
     }];
}

However, the captureStillImageAsynchronously... method causes the phone to play the "shutter" sound, which is no good for my application since it will be capturing images constantly.

I've read that it isn't possible to disable this sound effect. Instead, I'd like to capture frames from the phone's video input:

AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];

and hopefully turn those into UIImage objects.

How do I do this? I don't have much understanding of how AVFoundation works; I downloaded some sample code and modified it for my purposes.

Bra*_*son 5

Don't use the still camera for this. Instead, grab frames from the device's video camera and process the data contained in the pixel buffers you receive as an AVCaptureVideoDataOutputSampleBufferDelegate.

You can set up a video connection using code like the following:

// Grab the back-facing camera
AVCaptureDevice *backFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) 
{
    if ([device position] == AVCaptureDevicePositionBack) 
    {
        backFacingCamera = device;
    }
}

// Create the capture session
captureSession = [[AVCaptureSession alloc] init];

// Add the video input  
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
if ([captureSession canAddInput:videoInput]) 
{
    [captureSession addInput:videoInput];
}

// Add the video frame output   
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
};

Then, you'll need to process these frames in the delegate method, which looks like the following:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

        // Process pixel buffer bytes here

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}

The raw pixel bytes for the BGRA image will be contained in the array starting at CVPixelBufferGetBaseAddress(cameraFrame). You can iterate over those to obtain the values you need.
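For example, here is a minimal sketch (my own, not from the original answer) of what could go in place of the "Process pixel buffer bytes here" comment above, assuming the 32BGRA format configured earlier; the luminance weights are just an illustrative approximation:

// Runs inside captureOutput:didOutputSampleBuffer:fromConnection:, after the
// pixel buffer has been locked, so the base address is valid here.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
unsigned char *pixels = (unsigned char *)CVPixelBufferGetBaseAddress(cameraFrame);

double totalLuminance = 0.0;
for (int row = 0; row < bufferHeight; row++)
{
    unsigned char *rowStart = pixels + row * bytesPerRow;
    for (int column = 0; column < bufferWidth; column++)
    {
        unsigned char *pixel = rowStart + column * 4; // byte layout: B, G, R, A
        // Rough luminance approximation from the BGRA components
        totalLuminance += 0.114 * pixel[0] + 0.587 * pixel[1] + 0.299 * pixel[2];
    }
}
double averageLuminance = totalLuminance / (double)(bufferWidth * bufferHeight);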

However, you'll find that any operation performed over the entire image on the CPU will be a little slow. You can use the Accelerate framework to help with an average color operation like the one you want here. I've used vDSP_meanv() in the past to average luminance values, once you have them in an array. For something like that, you may be better served grabbing YUV planar data from the camera rather than the BGRA values I pull down here.
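As a rough illustration of that vDSP_meanv() suggestion (my own sketch, not from the original answer), assuming the video output is reconfigured to deliver kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange so that plane 0 is an 8-bit luminance plane, and that Accelerate.framework is linked:

#import <Accelerate/Accelerate.h>

// Sketch: average the Y (luminance) plane of a biplanar YUV frame with vDSP.
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);

uint8_t *lumaPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);
size_t lumaWidth = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
size_t lumaHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);

vDSP_Length pixelCount = lumaWidth * lumaHeight;
float *floatLuma = (float *)malloc(pixelCount * sizeof(float));

// Convert the 8-bit luminance values to floats one row at a time, in case
// the rows are padded to a stride wider than the visible width.
for (size_t row = 0; row < lumaHeight; row++)
{
    vDSP_vfltu8(lumaPlane + row * lumaBytesPerRow, 1, floatLuma + row * lumaWidth, 1, lumaWidth);
}

float meanLuminance = 0.0f;
vDSP_meanv(floatLuma, 1, &meanLuminance, pixelCount);

free(floatLuma);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

To get that pixel format, you'd set kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange in the videoSettings dictionary above instead of kCVPixelFormatType_32BGRA.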

I've also written an open source framework for processing video using OpenGL ES, although I don't yet have whole-image reduction operations in it like the ones you'd need for this kind of image analysis. My histogram generator is probably the closest thing I have to what you're trying to do.