I'm building an app using AVFoundation.

I call [assetWriterInput appendSampleBuffer:sampleBuffer] inside the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
delegate method, where I manipulate the pixels in the sample buffer (using its pixel buffer to apply an effect).

The client now also wants text (a timestamp & frame counter) drawn onto the frames, but I haven't found a way to do that yet. I tried converting the sample buffer to a UIImage, drawing the text on the image, and then converting the image back to a sample buffer, but then
CMSampleBufferDataIsReady(sampleBuffer)
fails.
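One way to avoid invalidating the sample buffer is to skip the UIImage round-trip entirely and draw the text straight into the frame's pixel memory, so appendSampleBuffer: still sees the original buffer. Below is a minimal sketch of that idea (not my working code); drawOverlayText is a hypothetical helper name, and it assumes the capture output is configured for kCVPixelFormatType_32BGRA:

// Sketch: draw an overlay string directly into the frame's pixel buffer.
// Assumes AVCaptureVideoDataOutput delivers kCVPixelFormatType_32BGRA frames.
static void drawOverlayText(CMSampleBufferRef sampleBuffer, NSString *text)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Wrap the pixel buffer's memory in a bitmap context; drawing into this
    // context mutates the frame the writer will encode.
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(imageBuffer),
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(imageBuffer),
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Flip to UIKit's upper-left-origin coordinate system so the string
    // drawing below is not rendered upside down.
    CGContextTranslateCTM(context, 0.0, CVPixelBufferGetHeight(imageBuffer));
    CGContextScaleCTM(context, 1.0, -1.0);

    UIGraphicsPushContext(context);
    [text drawAtPoint:CGPointMake(10.0, 10.0)
       withAttributes:@{ NSFontAttributeName: [UIFont boldSystemFontOfSize:24.0],
                         NSForegroundColorAttributeName: [UIColor whiteColor] }];
    UIGraphicsPopContext();

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

The caller would invoke this from the capture delegate with the formatted timestamp/counter string, before appending the sample buffer.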
Here is my UIImage category method:
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Assumes the capture output delivers kCVPixelFormatType_32BGRA frames.
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    // Balance the lock taken above; the original version never unlocked the buffer.
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    UIImage *newUIImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return newUIImage;
}
and
- (CMSampleBufferRef) …