This is driving me crazy - I've searched everywhere and tried everything I can think of.
I'm making an iPhone app that uses AVFoundation - specifically AVCapture to capture video from the iPhone camera.
I need a custom image overlaid on the video feed and included in the recording.
So far I have the AVCapture session set up, can display the feed, access each frame, save it as a UIImage, and composite the overlay image onto it. I then convert this new UIImage into a CVPixelBufferRef. To double-check that the bufferRef is working, I converted it back to a UIImage, and it still displays the image fine.
The trouble starts when I try to convert the CVPixelBufferRef into a CMSampleBufferRef to append to the AVCaptureSession's assetWriterInput. The CMSampleBufferRef always comes back NULL when I attempt to create it.
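For reference, my understanding is that wrapping a pixel buffer requires a CMVideoFormatDescriptionRef plus timing info via CMSampleBufferCreateForImageBuffer. This is only a sketch of what I believe the call should look like (assuming `pixelBuffer` is valid and `sampleBuffer` is the original frame from captureOutput, used just for its timing):

    // Sketch: wrapping a CVPixelBufferRef in a CMSampleBufferRef.
    CMVideoFormatDescriptionRef formatDesc = NULL;
    OSStatus status = CMVideoFormatDescriptionCreateForImageBuffer(
        kCFAllocatorDefault, pixelBuffer, &formatDesc);

    // Reuse the timing of the original captured frame.
    CMSampleTimingInfo timing;
    timing.duration = CMSampleBufferGetDuration(sampleBuffer);
    timing.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    timing.decodeTimeStamp = kCMTimeInvalid;

    CMSampleBufferRef newSampleBuffer = NULL;
    if (status == noErr) {
        status = CMSampleBufferCreateForImageBuffer(
            kCFAllocatorDefault, pixelBuffer, true, NULL, NULL,
            formatDesc, &timing, &newSampleBuffer);
    }
    if (formatDesc) CFRelease(formatDesc);

Even with something along these lines, the output buffer comes back NULL for me.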
Here is the - (void)captureOutput function:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *botImage = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *wheel = [self imageFromView:wheelView];
    UIImage *finalImage = [self overlaidImage:botImage :wheel];
    //[previewImage setImage:finalImage]; <- works -- the image is being merged into one UIImage

    CVPixelBufferRef pixelBuffer = NULL;
    CGImageRef cgImage = CGImageCreateCopy(finalImage.CGImage);
    CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    int status = CVPixelBufferCreateWithBytes(NULL,
                                              self.view.bounds.size.width,
                                              self.view.bounds.size.height,
                                              kCVPixelFormatType_32BGRA,
                                              (void *)CFDataGetBytePtr(image),
                                              CGImageGetBytesPerRow(cgImage),
                                              NULL,
                                              0,
                                              NULL,
                                              &pixelBuffer);
    if (status == 0) {
        OSStatus result = …