Using CMSampleTimingInfo, CMSampleBuffer and AudioBufferList with a raw PCM stream

Tim*_*Tim 5 objective-c core-audio ios webrtc

I'm receiving a raw PCM stream from Google's WebRTC C++ reference implementation (via a hook inserted into VoEBaseImpl::GetPlayoutData). The audio appears to be linear PCM, signed int16, but when I record it with an AVAssetWriter the resulting audio file is highly distorted and pitched up.

I'm assuming there is a mistake in one of the input parameters, most likely in the conversion of the stereo int16 data into an AudioBufferList and then into a CMSampleBuffer. Is there anything wrong with the following code?

void RecorderImpl::RenderAudioFrame(void* audio_data, size_t number_of_frames, int sample_rate, int64_t elapsed_time_ms, int64_t ntp_time_ms) {
    OSStatus status;

    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = sample_rate;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mChannelsPerFrame * audioFormat.mBitsPerChannel / 8;
    audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket / audioFormat.mFramesPerPacket;

    CMSampleTimingInfo timing = { CMTimeMake(1, sample_rate), CMTimeMake(elapsed_time_ms, 1000), kCMTimeInvalid };

    CMFormatDescriptionRef format = NULL;
    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, sizeof(acl), &acl, 0, NULL, NULL, &format);
    if(status != 0) {
        NSLog(@"Failed to create audio format description");
        return;
    }

    CMSampleBufferRef buffer;
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, (CMItemCount)number_of_frames, 1, &timing, 0, NULL, &buffer);
    if(status != 0) {
        NSLog(@"Failed to allocate sample buffer");
        return;
    }

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame;
    bufferList.mBuffers[0].mDataByteSize = (UInt32)(number_of_frames * audioFormat.mBytesPerFrame);
    bufferList.mBuffers[0].mData = audio_data;
    status = CMSampleBufferSetDataBufferFromAudioBufferList(buffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &bufferList);
    if(status != 0) {
        NSLog(@"Failed to convert audio buffer list into sample buffer");
        return;
    }

    [recorder writeAudioFrames:buffer];

    CFRelease(buffer);
}

For reference, on an iPhone 6S+/iOS 9.2 the stream I receive from WebRTC has a sample rate of 48 kHz, with 480 samples per call to this hook, and I receive data every 10 ms.

Rhy*_*man 5

First of all, congratulations on creating an audio CMSampleBuffer from scratch. For most people they are neither created nor destroyed, but handed down, pristine and mysterious, from CoreMedia and AVFoundation.

The presentationTimeStamps in your timing info are in integral milliseconds, which cannot represent the positions in time of your 48 kHz samples.

Instead of CMTimeMake(elapsed_time_ms, 1000), try CMTimeMake(elapsed_frames, sample_rate), where elapsed_frames is the number of frames you have written previously. A rough sketch of that idea follows.
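A minimal sketch of the suggestion, assuming an elapsed_frames_ counter added as a member of RecorderImpl (initialised to 0) that accumulates across calls:

// Express the presentation timestamp on the sample-rate timescale
// instead of in milliseconds.
CMSampleTimingInfo timing = {
    CMTimeMake(1, sample_rate),                 // duration of one frame
    CMTimeMake(elapsed_frames_, sample_rate),   // presentation time in frames
    kCMTimeInvalid                              // no decode timestamp needed
};
elapsed_frames_ += number_of_frames;            // advance for the next call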

That would explain the distortion, but not the pitch, so also make sure the AudioStreamBasicDescription matches your AVAssetWriterInput settings. It's hard to say without seeing your AVAssetWriter code.
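For example, writer input settings that would match the ASBD in the question might look like the sketch below (an assumption about interleaved signed 16-bit stereo at 48 kHz; the questioner's actual AVAssetWriter configuration is not shown):

NSDictionary *audioSettings = @{
    AVFormatIDKey: @(kAudioFormatLinearPCM),
    AVSampleRateKey: @48000,
    AVNumberOfChannelsKey: @2,
    AVLinearPCMBitDepthKey: @16,
    AVLinearPCMIsFloatKey: @NO,
    AVLinearPCMIsBigEndianKey: @NO,
    AVLinearPCMIsNonInterleaved: @NO
};
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                    outputSettings:audioSettings];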

p.s. Watch out for writeAudioFrames - if it is asynchronous, you will have ownership problems with audio_data.
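One defensive option, sketched here, is to take a private copy of the incoming samples before queuing an asynchronous write (copied_data is an illustrative name, not part of the original code):

// Copy the PCM data so the caller's buffer can be reused or freed
// as soon as RenderAudioFrame returns.
size_t byte_count = number_of_frames * audioFormat.mBytesPerFrame;
void *copied_data = malloc(byte_count);
memcpy(copied_data, audio_data, byte_count);
// ... build the AudioBufferList / CMSampleBuffer from copied_data instead,
// and free(copied_data) once the writer has finished with it.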

p.p.s. It also looks like you're leaking the CMFormatDescriptionRef.
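A sketch of the corresponding cleanup in the code above: the sample buffer retains the format description, so the local reference can be released once CMSampleBufferCreate has succeeded (and on the early-return paths as well).

CFRelease(format);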


Tim*_*Tim 2

I ended up opening the resulting audio file in Audacity and found that half of every frame was being dropped, as shown in this rather bizarre-looking waveform:

(Waveform before the fix)

Changing acl.mChannelLayoutTag to kAudioChannelLayoutTag_Mono and audioFormat.mChannelsPerFrame to 1 solved the problem, and the audio quality is now perfect. Hooray!
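For completeness, a sketch of the format setup from the question after that change (everything else in RenderAudioFrame stays the same):

AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;   // was kAudioChannelLayoutTag_Stereo

AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = sample_rate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;                     // was 2
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mChannelsPerFrame * audioFormat.mBitsPerChannel / 8;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket / audioFormat.mFramesPerPacket;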