AVAssetWriter slowdown

Ste*_*ten (1 vote) · avfoundation, ios, avassetwriter

I'm using AVAssetWriter to save the live feed from the camera. This works well with the following code:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer  fromConnection:(AVCaptureConnection *)connection{ 

 CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
 CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

 if(videoWriter.status != AVAssetWriterStatusWriting){
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:lastSampleTime];
 }

 if(adaptor.assetWriterInput.readyForMoreMediaData) [adaptor appendPixelBuffer:imageBuffer withPresentationTime:lastSampleTime];
 else NSLog(@"adaptor not ready");
}

I usually get close to 30 fps (though, as others have noted, not 60 fps on an iPhone 4s), and when I time [adaptor appendPixelBuffer] it takes only a few milliseconds.

However, I don't need the full frame; I do need high quality (low compression, a keyframe every frame), because I'll read the video back and process it afterwards. I therefore want to crop the image before writing. Fortunately I only need a strip in the middle, so a simple memcpy of the buffer should do. To that end I create a CVPixelBufferRef, copy into it, and write it with the adaptor:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer  fromConnection:(AVCaptureConnection *)connection{ 

 CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
 CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

 if(videoWriter.status != AVAssetWriterStatusWriting){
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:lastSampleTime];
 }

 CVPixelBufferLockBaseAddress(imageBuffer,0);
 size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
 size_t width = CVPixelBufferGetWidth(imageBuffer);
 size_t height = CVPixelBufferGetHeight(imageBuffer);
 void * buffIn = CVPixelBufferGetBaseAddress(imageBuffer);

 CVPixelBufferRef pxbuffer = NULL;
 CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, NULL, &pxbuffer);

 NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
 CVPixelBufferLockBaseAddress(pxbuffer, 0);

  void *buffOut = CVPixelBufferGetBaseAddress(pxbuffer);
  NSParameterAssert(buffOut != NULL);

  // Copy the whole buffer while testing (assumes both buffers have the
  // same bytesPerRow; a stride-safe copy would go row by row)
  memcpy(buffOut, buffIn, bytesPerRow * height);
  //memcpy(buffOut, buffIn+sidecrop, width * 100 * 4); 

  if (adaptor.assetWriterInput.readyForMoreMediaData) [adaptor appendPixelBuffer:pxbuffer withPresentationTime:lastSampleTime];
  else NSLog(@"adaptor not ready");

   CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
   CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

This also works, and the video looks fine. However, it is very slow and the frame rate becomes unacceptable. Strangely, the big slowdown isn't the copy but the [adaptor appendPixelBuffer] step, which now takes 10-100x longer than before. So I guess it doesn't like the pixel buffer I create, but I can't see why. I'm using kCVPixelFormatType_32BGRA when setting up both the video output and the adaptor.

Can anyone suggest a better way to do the copy/crop? Can it be done directly on the imageBuffer?

Ste*_*ten (5 votes)

I found the solution. In iOS 5 (I had missed the update), you can have AVAssetWriter crop the video for you (as Steve noted). Set AVVideoScalingModeKey to AVVideoScalingModeResizeAspectFill:

videoWriter = [[AVAssetWriter alloc] initWithURL:filmurl 
                                        fileType:AVFileTypeQuickTimeMovie 
                                           error:&error];  
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: 
     AVVideoCodecH264, AVVideoCodecKey, 
     [NSNumber numberWithInt:1280], AVVideoWidthKey,  
     [NSNumber numberWithInt:200], AVVideoHeightKey,
     AVVideoScalingModeResizeAspectFill, AVVideoScalingModeKey,// This turns the
                                                               // scale into a crop
     nil]; 
videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo 
                                                       outputSettings:videoSettings] retain];