Why is my QTKit-based image encoding application so slow?

mbe*_*ben 7 macos cocoa qtkit quartz-composer

In the Cocoa application I am writing, I take snapshot images (NSImage objects) from a Quartz Composer renderer, and I would like to encode them into a QTMovie at 720*480, 25 fps, with the H264 codec, using the addImage: method. Here is the corresponding code snippet:

qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720,480) colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size


imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: @"avc1", // use the H264 codec
              QTAddImageCodecType, nil];

qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object

long fps = 25;
frameNum = 0;

NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage * myImage;
while (renderingTime <= myMovieDuration){
    if(![qRenderer renderAtTime: renderingTime arguments:NULL])
        NSLog(@"Rendering failed at time %.3fs", renderingTime);
    myImage = [qRenderer snapshotImage];
    [qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
    [myImage release];
    frameNum ++;
    renderingTime = frameNum * frameInc;
}
[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release]; 

It works, but my application is not able to do this in real time on my new MacBook Pro, while I know that QuickTime Broadcaster can encode images in H264 in real time, at even better quality, on the same computer.

So why? What is the problem here? Is it a hardware-management issue (multi-core threading, GPU, ...), or am I missing something? Let me preface this by saying that I am new (2 weeks of practice) to the world of Apple development, including Objective-C, Cocoa, Xcode, and the QuickTime and Quartz Composer frameworks.

Thanks for your help.

bda*_*ash 5

AVFoundation is a much more efficient way to render a QuartzComposer animation to an H.264 video stream.


size_t width = 640;
size_t height = 480;

const char *outputFile = "/tmp/Arabesque.mp4";

QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
                                                      colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:composition];

unlink(outputFile); // AVAssetWriter refuses to write over an existing file
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL];

NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) };
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

[videoWriter addInput:writerInput];
[writerInput release];

// The pixel buffer adaptor lets us hand CVPixelBuffers straight to the writer input
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL];

int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

__block long frameNumber = 0;

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);

NSLog(@"Starting.");
// The writer input pulls frames from this block whenever it is ready for more data
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
    while ([writerInput isReadyForMoreMediaData]) {
        NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
        if (![renderer renderAtTime:frameTime arguments:NULL]) {
            NSLog(@"Rendering failed at time %.3fs", frameTime);
            break;
        }

        // Ask QCRenderer for the snapshot as a CVPixelBuffer directly, skipping an NSImage round trip
        CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
        [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
        CFRelease(frame);

        frameNumber++;
        if (frameNumber >= totalFrameCount) {
            [writerInput markAsFinished];
            [videoWriter finishWriting]; // synchronous; returns once the file has been written out
            [videoWriter release];
            [renderer release];
            NSLog(@"Rendered %ld frames.", frameNumber);
            break;
        }

    }
}];

In my testing, this was around twice as fast as your posted code that uses QTKit. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile, the remaining bottlenecks appear to be the rendering of the composition itself, and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some bearing on this.
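
If you want to verify where the time goes with your own composition, one rough approach (a sketch of mine, not part of the measured code above) is to bracket each stage of the loop body with CACurrentMediaTime() from QuartzCore and log the split. All of the variables below are the ones from the block above:

// Drop into the while loop above; requires #import <QuartzCore/QuartzCore.h>
CFTimeInterval t0 = CACurrentMediaTime();
if (![renderer renderAtTime:frameTime arguments:NULL]) {
    NSLog(@"Rendering failed at time %.3fs", frameTime);
    break;
}
CFTimeInterval t1 = CACurrentMediaTime();

// GPU -> CPU readback happens here
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
CFTimeInterval t2 = CACurrentMediaTime();

[pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
CFRelease(frame);
CFTimeInterval t3 = CACurrentMediaTime();

NSLog(@"render: %.1f ms, readback: %.1f ms, append: %.1f ms",
      (t1 - t0) * 1000.0, (t2 - t1) * 1000.0, (t3 - t2) * 1000.0);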

It may be possible to optimize this further by using QCRenderer's ability to provide snapshots as CVOpenGLBufferRefs, which could keep the frame's data on the GPU rather than reading it back to hand it to the encoder. I didn't look too far into that, though.
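
For what it's worth, the snapshot side of that idea might look like the sketch below. This is untested; the @"CVOpenGLBuffer" type string is an assumption that mirrors the @"CVPixelBuffer" request used above, and you would still need some way of feeding a GPU-resident buffer to the encoder, since AVAssetWriterInputPixelBufferAdaptor only accepts CVPixelBuffers.

// Untested sketch: request the snapshot as a GPU-resident CVOpenGLBuffer
// instead of a CVPixelBuffer; this shows the snapshot side only.
CVOpenGLBufferRef glBuffer = (CVOpenGLBufferRef)[renderer createSnapshotImageOfType:@"CVOpenGLBuffer"];
if (glBuffer != NULL) {
    // ... hand the GPU-resident frame to whatever can consume it ...
    CVOpenGLBufferRelease(glBuffer);
}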