AVFoundation: video created from images has the wrong frame rate

aum*_*ets 1 frame-rate video-processing avfoundation avassetwriter

I'm trying to create a video from images using AVFoundation. There are already several threads about this approach, but I believe many of them suffer from the same problem I'm facing here.

The video plays fine on the iPhone, but it doesn't play correctly in VLC, and it also doesn't play back properly on Facebook and Vimeo, for example (sometimes a few frames are out of sync). VLC reports the video's frame rate as 0.58 fps, but it should be above 24, right?

Does anyone know what is causing this behavior?

This is the code used to create the video:

self.videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeMPEG4 error:&error];
    // Codec compression settings
    NSDictionary *videoSettings = @{
                                    AVVideoCodecKey : AVVideoCodecH264,
                                    AVVideoWidthKey : @(self.videoSize.width),
                                    AVVideoHeightKey : @(self.videoSize.height),
                                    AVVideoCompressionPropertiesKey : @{
                                            AVVideoAverageBitRateKey : @(20000*1000), // 20 000 kbits/s
                                            AVVideoProfileLevelKey : AVVideoProfileLevelH264High40,
                                            AVVideoMaxKeyFrameIntervalKey : @(1)
                                            }
                                    };

    AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                     sourcePixelBufferAttributes:nil];

    videoWriterInput.expectsMediaDataInRealTime = NO;
    [self.videoWriter addInput:videoWriterInput];
    [self.videoWriter startWriting];
    [self.videoWriter startSessionAtSourceTime:kCMTimeZero];

    [adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:self.photoToVideoQueue usingBlock:^{
        CMTime time = CMTimeMakeWithSeconds(0, 1000);

        for (Segment* segment in segments) {
            @autoreleasepool {
                UIImage* image = segment.segmentImage;
                CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];
                [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
                CVPixelBufferRelease(buffer);

                CMTime millisecondsDuration = CMTimeMake(segment.durationMS.integerValue, 1000);
                time = CMTimeAdd(time, millisecondsDuration);
            }
        }
        [videoWriterInput markAsFinished];
        [self.videoWriter endSessionAtSourceTime:time];
        [self.videoWriter finishWritingWithCompletionHandler:^{
            NSLog(@"Video writer has finished creating video");
        }];
    }];

- (CVPixelBufferRef)pixelBufferFromImage:(UIImage*)image withImageSize:(CGSize)size{
    CGImageRef cgImage = image.CGImage;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess){
        DebugLog(@"Failed to create pixel buffer");
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, 2);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)), cgImage);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

+ (BOOL)appendToAdapter:(AVAssetWriterInputPixelBufferAdaptor*)adaptor
            pixelBuffer:(CVPixelBufferRef)buffer
                 atTime:(CMTime)time{
    while (!adaptor.assetWriterInput.readyForMoreMediaData) {
        [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
    }
    return [adaptor appendPixelBuffer:buffer withPresentationTime:time];
}

Tim*_*ull 5

Looking at the code, I think the problem is the way you're using your timestamps...

A CMTime is made up of a value and a timescale. The way I think about it is to treat the timescale part essentially as the frame rate (this isn't strictly accurate, but it's a useful mental tool that works well enough for what you're trying to do).

The first frame of video at 30 FPS would be:

CMTimeMake(1, 30);

Or frame 60 at 30 frames per second, which happens to also be (60 divided by 30) the 2-second point of the video:

CMTimeMake(60, 30); 

You're specifying 1000 as your timescale, which is much higher than you need. In the loop, you appear to be placing a frame, then adding about a second and placing the next frame. That's what's getting you the 0.58 FPS... (although I would have expected 1 FPS, but who knows with the intricacies of codecs).

Instead, what you want to do is loop 30 times (if you want the image displayed for 1 second at 30 frames) and place the same image on each of those frames. That should get you to 30 FPS. Of course, if you want 24 FPS you can use a timescale of 24, or whatever your requirement happens to be.

Try rewriting this section of your code:

[adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:self.photoToVideoQueue usingBlock:^{
    CMTime time = CMTimeMakeWithSeconds(0, 1000);

    for (Segment* segment in segments) {
        @autoreleasepool {
            UIImage* image = segment.segmentImage;
            CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];
            [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
            CVPixelBufferRelease(buffer);

            CMTime millisecondsDuration = CMTimeMake(segment.durationMS.integerValue, 1000);
            time = CMTimeAdd(time, millisecondsDuration);
        }
    }
    [videoWriterInput markAsFinished];
    [self.videoWriter endSessionAtSourceTime:time];
    [self.videoWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Video writer has finished creating video");
    }];
}];

To something more like this:

[adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:self.photoToVideoQueue usingBlock:^{
    // Let's start at the first frame with a timescale of 30 FPS
    CMTime time = CMTimeMake(1, 30);

    for (Segment* segment in segments) {
        @autoreleasepool {
            UIImage* image = segment.segmentImage;
            CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];
            for (int i = 1; i <= 30; i++) {
                [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
                time = CMTimeAdd(time, CMTimeMake(1, 30)); // Add another "frame"
            }
            CVPixelBufferRelease(buffer);

        }
    }
    [videoWriterInput markAsFinished];
    [self.videoWriter endSessionAtSourceTime:time];
    [self.videoWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Video writer has finished creating video");
    }];
}];
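A small follow-up note on this approach: it shows each image for exactly one second, while the question's original loop used a per-segment duration (segment.durationMS). If you need to keep variable durations and still write clean 30 fps timing, one option is to round each segment's duration to a whole number of 1/30 s frames and repeat the image for that many frames. The following is only a sketch of that idea, not part of the answer above; it reuses the segments, videoWriterInput, pixelBufferFromImage:withImageSize: and appendToAdapter:pixelBuffer:atTime: helpers from the question, and assumes segment.durationMS is an NSNumber of milliseconds as in the original code:

[adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:self.photoToVideoQueue usingBlock:^{
    const int64_t fps = 30;                       // fixed output frame rate
    CMTime frameDuration = CMTimeMake(1, (int32_t)fps); // one frame = 1/30 s
    CMTime time = kCMTimeZero;                    // first frame presented at 0

    for (Segment* segment in segments) {
        @autoreleasepool {
            UIImage* image = segment.segmentImage;
            CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];

            // Convert the segment duration from milliseconds to a whole number of frames
            // (integer rounding to the nearest frame), showing every segment for at least one frame.
            int64_t frameCount = (segment.durationMS.integerValue * fps + 500) / 1000;
            if (frameCount < 1) frameCount = 1;

            for (int64_t i = 0; i < frameCount; i++) {
                [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
                time = CMTimeAdd(time, frameDuration);
            }
            CVPixelBufferRelease(buffer);
        }
    }
    [videoWriterInput markAsFinished];
    [self.videoWriter endSessionAtSourceTime:time];
    [self.videoWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Video writer has finished creating video");
    }];
}];

Because every presentation timestamp now lands on the 1/30 s grid, players that derive the frame rate from timestamp spacing (VLC included) should report roughly 30 fps instead of 0.58 fps.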