Using GPUImage with AVVideoCompositing

Asked by dhe*_*nke · tags: video, opengl-es, ios, gpuimage

I'm trying to implement a real-time chroma key filter between two videos by combining GPUImage with AVVideoCompositing. Doing this naively, going from the CVPixelBuffer to a CIImage, to a CGImage, into GPUImage, back out to a CGImage, to a CIImage, and into a CVPixelBuffer, is terribly inefficient and causes memory problems.
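Roughly, the naive round trip I mean looks like the sketch below. This is illustrative only: it uses a brightness filter as a stand-in for the real chroma key chain, and _ciContext, destinationPixelBuffer, and the method name processNaively: are assumed names rather than anything from my real code.

- (void)processNaively:(CVPixelBufferRef)sourcePixelBuffer into:(CVPixelBufferRef)destinationPixelBuffer
{
    // CVPixelBuffer -> CIImage -> CGImage -> UIImage, all on the CPU side
    CIImage *ciInput = [CIImage imageWithCVPixelBuffer:sourcePixelBuffer];
    CGImageRef cgInput = [_ciContext createCGImage:ciInput fromRect:[ciInput extent]];
    UIImage *uiInput = [UIImage imageWithCGImage:cgInput];
    CGImageRelease(cgInput);

    // Run the GPUImage filter (the real code would use a chroma key blend here)
    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:uiInput];
    GPUImageBrightnessFilter *filter = [[GPUImageBrightnessFilter alloc] init];
    [picture addTarget:filter];
    [filter useNextFrameForImageCapture];
    [picture processImage];
    UIImage *uiOutput = [filter imageFromCurrentFramebuffer];

    // ...and back again: UIImage -> CIImage -> CVPixelBuffer
    CIImage *ciOutput = [[CIImage alloc] initWithCGImage:uiOutput.CGImage];
    [_ciContext render:ciOutput toCVPixelBuffer:destinationPixelBuffer];
}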

I noticed that the GPUImage framework has texture objects, render targets, and framebuffers. I was hoping to take advantage of CVOpenGLESTextureCacheCreateTextureFromImage on iOS to keep everything on the GPU.

I don't think I fully understand the inner workings of the framework, because I assumed I could set up a filter chain on a GPUImageTextureInput object and then read the filter's renderTarget, which is a CVPixelBufferRef. In the code below, renderTarget is always nil, and calling imageFromCurrentFramebuffer gives me a gray frame that is not my image.

Note that the example below is not a chroma key; it is a simple brightness filter on a single video, just to try to prove the concept.

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>
#import "GPUImage.h"

@interface MyCustomCompositor : NSObject <AVVideoCompositing>
@end

@implementation MyCustomCompositor {
    CVOpenGLESTextureCacheRef _textureCache;
}

- (instancetype)init
{
    self = [super init];
    if (self) {
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [GPUImageContext sharedImageProcessingContext].context, NULL, &_textureCache);
    }
    return self;
}

- (NSDictionary<NSString *,id> *)requiredPixelBufferAttributesForRenderContext
{
    return @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @[@(kCVPixelFormatType_32BGRA)],
             (NSString *)kCVPixelBufferOpenGLCompatibilityKey : @YES};
}

- (NSDictionary<NSString *,id> *)sourcePixelBufferAttributes
{
    return @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @[@(kCVPixelFormatType_32BGRA)],
             (NSString *)kCVPixelBufferOpenGLCompatibilityKey : @YES};
}

- (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)asyncVideoCompositionRequest
{
    @autoreleasepool {
        CVPixelBufferRef mePixelBuffer = [asyncVideoCompositionRequest sourceFrameByTrackID:200];
        CVPixelBufferLockBaseAddress(mePixelBuffer, kCVPixelBufferLock_ReadOnly);

        CVOpenGLESTextureRef meTextureRef = NULL;
        size_t width = CVPixelBufferGetWidth(mePixelBuffer);
        size_t height = CVPixelBufferGetHeight(mePixelBuffer);
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, mePixelBuffer, NULL, GL_TEXTURE_2D, GL_BGRA, (int)width, (int)height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &meTextureRef);

        GPUImageTextureInput *meTextureInput = [[GPUImageTextureInput alloc] initWithTexture:CVOpenGLESTextureGetName(meTextureRef) size:CGSizeMake(width, height)];

        GPUImageBrightnessFilter *filter = [[GPUImageBrightnessFilter alloc] init];
        filter.brightness = 0.5;
        [meTextureInput addTarget:filter];

        [filter setFrameProcessingCompletionBlock:^(GPUImageOutput *imageOutput, CMTime time) {
            [asyncVideoCompositionRequest finishWithComposedVideoFrame:((GPUImageBrightnessFilter *)imageOutput).renderTarget];
        }];

        [meTextureInput processTextureWithFrameTime:kCMTimeZero];

        CFRelease(meTextureRef);
        CVOpenGLESTextureCacheFlush(_textureCache, 0);

        CVPixelBufferUnlockBaseAddress(mePixelBuffer, kCVPixelBufferLock_ReadOnly);
    }
}

@end

I'm not using GPUImageMovieWriter or the movie APIs in GPUImage because I need finer-grained control over my composition. The composition can consist of multiple chroma key instructions that reference different green-screen overlay videos over different time ranges, and as far as I can tell the movie APIs in GPUImage are limited to filtering whole video files. I also need the composition's ability to manipulate the audio tracks and the audio mix.
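For context, the kind of setup I have in mind looks roughly like the sketch below. MyChromaKeyInstruction is a hypothetical class conforming to the AVVideoCompositionInstruction protocol, and the track IDs, time ranges, and its initializer are made up for illustration:

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.customVideoCompositorClass = [MyCustomCompositor class];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = CGSizeMake(1920.0, 1080.0);

// One custom instruction per time range, each referencing a different green-screen overlay track
MyChromaKeyInstruction *first = [[MyChromaKeyInstruction alloc]
    initWithTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMake(5 * 600, 600))
    backgroundTrackID:200
    overlayTrackID:201];
MyChromaKeyInstruction *second = [[MyChromaKeyInstruction alloc]
    initWithTimeRange:CMTimeRangeMake(CMTimeMake(5 * 600, 600), CMTimeMake(5 * 600, 600))
    backgroundTrackID:200
    overlayTrackID:202];
videoComposition.instructions = @[first, second];

// Audio is handled separately with an AVAudioMix on the same composition
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];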

I've already tried doing all of this in straight GL with custom shaders, but I figured I would take advantage of an existing framework for what I'm trying to do.

Answered by 小智:

I wrote a class called GPUImageFrameInput, modified from GPUImageMovie, whose input is a CVPixelBufferRef. The approach is:

  1. Wrap the request's sourcePixelBuffer in a GPUImageFrameOutput
  2. Create a GPUImageFrameInput that wraps the destinationPixelBuffer
  3. Run the filter chain
  4. Done

Here is the key code.

// output 
// wrap the sourcePixelBuffer from the request
// it's modified from GPUImageVideoCamera
@interface GPUImageFrameOutput ()
- (void)processSourcePixelBuffer:(CVPixelBufferRef)pixelBuffer withSampleTime:(CMTime)currentSampleTime;
@end

@implementation GPUImageFrameOutput

- (void)processSourcePixelBuffer:(CVPixelBufferRef)pixelBuffer withSampleTime:(CMTime)currentSampleTime {
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
        int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);

        if (bufferHeight == 0 || bufferWidth == 0) {
            return;
        }

        // upload pixelBuffer into a texture and notify targets,
        // almost the same as [GPUImageVideoCamera processVideoSampleBuffer:]
        // ...
    });
}

@end

// input 
// wrap the destinationPixelBuffer 
@interface GPUImageFrameInput() {
    CVPixelBufferRef targetBuffer;
    // ... others
}
@end

@implementation GPUImageFrameInput

- (void)setPixelBuffer:(CVPixelBufferRef)buffer{
    targetBuffer = buffer;
}

- (CVOpenGLESTextureRef)createDataFBO {
    if (!movieFramebuffer) {
       glActiveTexture(GL_TEXTURE1);
       glGenFramebuffers(1, &movieFramebuffer);
       glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer);
    }

    glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer);
    glViewport(0, 0, (int)_videoSize.width, (int)_videoSize.height);

    CVOpenGLESTextureRef renderTexture = nil;

    if ([GPUImageContext supportsFastTextureUpload]) {
        CVBufferSetAttachment(targetBuffer, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, kCVAttachmentMode_ShouldPropagate);
        CVBufferSetAttachment(targetBuffer, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_601_4, kCVAttachmentMode_ShouldPropagate);
        CVBufferSetAttachment(targetBuffer, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, kCVAttachmentMode_ShouldPropagate);

        CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache],
                                                  targetBuffer,
                                                  NULL, // texture attributes
                                                  GL_TEXTURE_2D,
                                                  GL_RGBA, // opengl format
                                                  (int)CVPixelBufferGetWidth(targetBuffer),
                                                  (int)CVPixelBufferGetHeight(targetBuffer),
                                                  GL_BGRA, // native iOS format
                                                  GL_UNSIGNED_BYTE,
                                                  0,
                                                  &renderTexture);

        glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
    }
    else
    {
     //...
    }
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete filter FBO: %d", status);

    return renderTexture;
}

@end

Then you can build the GPUImage chain just like anywhere else:

[frameInput setPixelBuffer:destinationPixelBuffer];

for (...) {
    GPUImageFrameOutput *output ...
    [output addTarget:filter atTextureLocation:index];
}
[filter addTarget:frameInput];
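Putting the pieces together, the request handler ends up looking roughly like the sketch below. It assumes a single source track, uses the simple brightness filter from the question as a stand-in for the real chroma key, assumes plain init on the two wrapper classes, and omits error handling and the GL synchronization needed before finishing the frame:

- (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)request
{
    // the destination buffer comes from the render context, as usual for AVVideoCompositing
    CVPixelBufferRef destinationPixelBuffer = [request.renderContext newPixelBuffer];

    CMPersistentTrackID trackID = [request.sourceTrackIDs.firstObject intValue];
    CVPixelBufferRef sourcePixelBuffer = [request sourceFrameByTrackID:trackID];

    GPUImageFrameOutput *output = [[GPUImageFrameOutput alloc] init];
    GPUImageFrameInput *frameInput = [[GPUImageFrameInput alloc] init];
    [frameInput setPixelBuffer:destinationPixelBuffer];

    GPUImageBrightnessFilter *filter = [[GPUImageBrightnessFilter alloc] init];
    [output addTarget:filter];
    [filter addTarget:frameInput];

    // drives the chain: sourcePixelBuffer -> filter -> destinationPixelBuffer
    [output processSourcePixelBuffer:sourcePixelBuffer withSampleTime:request.compositionTime];

    [request finishWithComposedVideoFrame:destinationPixelBuffer];
    CVPixelBufferRelease(destinationPixelBuffer);
}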

Hope that helps!