I am building an app using AVFoundation.

Just before I call [assetWriterInput appendSampleBuffer:sampleBuffer] in the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection method,
I manipulate the pixels in the sample buffer (using its pixel buffer to apply an effect).

Now the client also wants me to draw text (a timestamp & frame counter) onto the frames, but I haven't found a way to do this yet.

I tried converting the sample buffer to a UIImage, drawing the text onto the image, and then converting the image back into a sample buffer, but

CMSampleBufferDataIsReady(sampleBuffer)

fails.
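Independent of the buffer round-trip, the overlay string itself is straightforward to build. A minimal Swift sketch (the helper name and the timestamp format are my assumptions, not part of the original code):

```swift
import Foundation

// Hypothetical helper: formats the text to burn into each frame.
// `seconds` would come from CMTimeGetSeconds() on the buffer's
// presentation timestamp; the "HH:MM:SS.ff  #counter" format is
// just one plausible choice.
func overlayText(frameCounter: Int, seconds: Double) -> String {
    let total = Int(seconds)
    let hundredths = Int((seconds - seconds.rounded(.down)) * 100)
    return String(format: "%02d:%02d:%02d.%02d  #%d",
                  total / 3600, (total % 3600) / 60, total % 60,
                  hundredths, frameCounter)
}
```

The resulting string can then be drawn into the frame with whatever text-drawing path you end up using.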
Here is my UIImage category method:
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Assumes the capture output delivers kCVPixelFormatType_32BGRA.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    // The original never unlocked the base address; unlock it here to
    // balance the lock taken above.
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    UIImage *newUIImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return newUIImage;
}
and

- (CMSampleBufferRef) …

I am trying to add a watermark/logo to a video I am recording, using AVFoundation's AVCaptureVideoDataOutput. My class is set up as the sampleBufferDelegate and receives CMSampleBufferRefs. I already apply some effects to the CMSampleBufferRef's CVPixelBuffer and pass it back to the AVAssetWriter.

The logo in the top-left corner is delivered via a transparent PNG. The problem I am running into is that, once written to the video, the transparent parts of the UIImage come out black. Does anyone know what I am doing wrong, or what I might be forgetting?
Code snippets below:
// somewhere in the init of the class:
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{ kCIContextWorkingColorSpace : [NSNull null] }];

// sampleBufferDelegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    ....
    UIImage *logoImage = [UIImage imageNamed:@"logo.png"];
    CIImage *renderImage = [[CIImage alloc] initWithCGImage:logoImage.CGImage];
    CGColorSpaceRef cSpace = CGColorSpaceCreateDeviceRGB();

    [_ciContext render:renderImage
       toCVPixelBuffer:pixelBuffer
                bounds:[renderImage extent]
            colorSpace:cSpace];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRelease(cSpace);
    ....
}
It looks like CIContext is not drawing the CIImage's alpha. Any ideas?
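A likely cause, stated as my assumption rather than a confirmed diagnosis: rendering a CIImage into a pixel buffer overwrites the destination pixels instead of compositing over them, so fully transparent logo pixels (value 0, alpha 0) replace the frame's pixels with black. Compositing the logo over the frame first, with the video frame as the background image, avoids this. As a plain-Swift illustration of the premultiplied source-over rule such a compositing step applies:

```swift
// Premultiplied-alpha source-over: out = src + dst * (1 - src.alpha).
// A fully transparent logo pixel (value 0, alpha 0) leaves the frame
// unchanged, whereas a plain overwrite stores 0 -- i.e. opaque black.
func sourceOver(src: (value: Double, alpha: Double),
                dst: (value: Double, alpha: Double)) -> (value: Double, alpha: Double) {
    (src.value + dst.value * (1 - src.alpha),
     src.alpha + dst.alpha * (1 - src.alpha))
}
```

The overwrite behavior is exactly what happens when the logo image alone is handed to the renderer; blending it over the frame beforehand gives the renderer an already-opaque result to write.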
Use case:

To be able to log errors, we make them conform to a specific protocol. This seems to work fine for custom errors; however, checking or casting Foundation errors (e.g. URLError) against this protocol seems to fail.

I can't figure out what is different here.

In this example, we have a view model that performs an operation which can result in an error. We want to log this error with our logger.

Example:
// Protocol that makes an error loggable
protocol LoggableError: Error {
    var message: String { get }
}

// Our custom error:
enum CustomError: Error {
    case someError
}

extension CustomError: LoggableError {
    var message: String {
        "Some error occurred"
    }
}

// URL error conforming to our LoggableError:
extension URLError: LoggableError {
    var message: String {
        "Some network error occurred"
    }
}

// The logger
protocol LoggerProtocol {
    func handle(error: some …