Tags: iphone, xcode, avfoundation, avcapture, avcapturesession
This is driving me crazy. I've searched everywhere and tried everything I can think of.
I'm building an iPhone app that uses AVFoundation, specifically AVCapture, to capture video from the iPhone camera.
I need a custom image overlaid on top of the video feed and included in the recording.
So far I have the AVCapture session set up: I can display the feed, access the frames, save a frame as a UIImage, and merge the overlay image onto it. I then turn this new UIImage back into a CVPixelBufferRef, and to double-check that the pixel buffer works, I converted it back to a UIImage again; it still displays the image fine.
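(For reference, my imageFromSampleBuffer: is essentially the standard conversion from Apple's AVFoundation documentation, along these lines, assuming the capture output is configured for kCVPixelFormatType_32BGRA:)

// Standard conversion of a captured 32BGRA frame into a UIImage
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the buffer's pixels in a bitmap context and snapshot it
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}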
The trouble starts when I try to convert the CVPixelBufferRef into a CMSampleBufferRef so it can be appended to the AVCaptureSession's assetWriterInput. The CMSampleBufferRef always comes back NULL when I try to create it.
Here is my - (void)captureOutput method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *botImage = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *wheel = [self imageFromView:wheelView];
    UIImage *finalImage = [self overlaidImage:botImage :wheel];
    //[previewImage setImage:finalImage]; <- works -- the image is being merged into one UIImage

    CVPixelBufferRef pixelBuffer = NULL;
    CGImageRef cgImage = CGImageCreateCopy(finalImage.CGImage);
    CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    int status = CVPixelBufferCreateWithBytes(NULL,
                                              self.view.bounds.size.width,
                                              self.view.bounds.size.height,
                                              kCVPixelFormatType_32BGRA,
                                              (void *)CFDataGetBytePtr(image),
                                              CGImageGetBytesPerRow(cgImage),
                                              NULL,
                                              0,
                                              NULL,
                                              &pixelBuffer);
    if (status == 0) {
        OSStatus result = 0;
        CMVideoFormatDescriptionRef videoInfo = NULL;
        result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
        NSParameterAssert(result == 0 && videoInfo != NULL);

        CMSampleBufferRef myBuffer = NULL;
        result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                                    pixelBuffer, true, NULL, NULL, videoInfo, NULL, &myBuffer);
        NSParameterAssert(result == 0 && myBuffer != NULL); // always null :S

        NSLog(@"Trying to append");
        if (!CMSampleBufferDataIsReady(myBuffer)) {
            NSLog(@"sampleBuffer data is not ready");
            return;
        }
        if (![assetWriterInput isReadyForMoreMediaData]) {
            NSLog(@"Not ready for data :(");
            return;
        }
        if (![assetWriterInput appendSampleBuffer:myBuffer]) {
            NSLog(@"Failed to append pixel buffer");
        }
    }
}
The other solution I keep hearing about is to use an AVAssetWriterInputPixelBufferAdaptor, which removes the need for the messy CMSampleBufferRef wrapping. However, I have searched Stack Overflow and the Apple developer forums and documentation, and I cannot find a clear description or example of how to set one up or how to use it. If anyone has a working example, please show it to me, or help me sort out the problem above. I've been working on this non-stop for a week and am at my wits' end.
Let me know if you need any other information.
Thanks in advance,
Michael
You need an AVAssetWriterInputPixelBufferAdaptor. Here is the code to create one:
// Create the attributes dictionary for the pixel buffer adaptor
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];

// Create the pixel buffer adaptor
m_pixelsBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
       initWithAssetWriterInput:assetWriterInput
    sourcePixelBufferAttributes:bufferAttributes];
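(For completeness: the assetWriterInput the adaptor wraps must belong to an AVAssetWriter. A minimal sketch of that setup follows; the outputURL, the 480x320 dimensions, and the H.264 settings are illustrative assumptions, not part of the original answer. Note that the adaptor's pixelBufferPool is only populated once startWriting has been called.)

// Minimal sketch of the writer setup the adaptor above assumes
NSError *error = nil;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&error];

// Illustrative video settings: H.264 at 480x320
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:480], AVVideoWidthKey,
    [NSNumber numberWithInt:320], AVVideoHeightKey, nil];

AVAssetWriterInput *assetWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
assetWriterInput.expectsMediaDataInRealTime = YES; // feeding live capture frames
[assetWriter addInput:assetWriterInput];

// ... create the adaptor as shown above, then:
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];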
And here is the code to use it:
// If the input is ready to accept more media data
if (m_pixelsBufferAdaptor.assetWriterInput.readyForMoreMediaData) {
    // Create a pixel buffer from the adaptor's pool
    CVPixelBufferRef pixelsBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(NULL, m_pixelsBufferAdaptor.pixelBufferPool, &pixelsBuffer);

    // Lock the pixel buffer's base address
    CVPixelBufferLockBaseAddress(pixelsBuffer, 0);

    // Write your pixel data into the buffer (in your case, fill it with your finalImage data)
    [self yourFunctionToPutDataInPixelBuffer:CVPixelBufferGetBaseAddress(pixelsBuffer)];

    // Unlock the pixel buffer's base address
    CVPixelBufferUnlockBaseAddress(pixelsBuffer, 0);

    // Append the pixel buffer. Calculate currentFrameTime however suits your needs;
    // the simplest way is to start at zero and advance by one frame duration
    // (the inverse of your frame rate) each time you write a frame.
    [m_pixelsBufferAdaptor appendPixelBuffer:pixelsBuffer withPresentationTime:currentFrameTime];

    // Release the pixel buffer
    CVPixelBufferRelease(pixelsBuffer);
}
And don't forget to release your pixelsBufferAdaptor when you're done.
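(The yourFunctionToPutDataInPixelBuffer: part is left to you above. One way to fill the buffer from a UIImage is to wrap the buffer's base address in a CGBitmapContext and draw into it. The sketch below assumes the 32BGRA format set on the adaptor; the method name is hypothetical:)

// Hypothetical helper: draws a UIImage into a 32BGRA pixel buffer whose
// base address the caller has already locked, as in the snippet above.
- (void)fillPixelBuffer:(CVPixelBufferRef)pixelsBuffer withImage:(UIImage *)image
{
    void *baseAddress = CVPixelBufferGetBaseAddress(pixelsBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelsBuffer);
    size_t width = CVPixelBufferGetWidth(pixelsBuffer);
    size_t height = CVPixelBufferGetHeight(pixelsBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // 32BGRA corresponds to little-endian premultiplied-first in CGBitmapContext terms
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}

(For currentFrameTime, the simple running counter described above could look like this at 30 fps, where frameCount is a hypothetical instance variable incremented per frame:)

CMTime currentFrameTime = CMTimeMake(frameCount++, 30);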