Playing streaming PCM audio packets (decoded from Opus) with AVSampleBufferAudioRenderer

Jas*_*son 4 objective-c core-audio avfoundation audiotoolbox ios

Edit: updated the code per the suggestions below, fixed the ASBD, and took another shot at getting the PTS right. It still doesn't play any audio, but at least there are no more errors.


I'm working on an iOS project in which I receive Opus audio packets and try to play them with AVSampleBufferAudioRenderer. I'm using Opus's own decoder for now, so in the end I just need to take the decoded PCM packets and play them. The whole top-to-bottom process isn't well documented, but I think I'm close. Here's the code I'm working with so far (edited, with some values hard-coded for simplicity).

static AVSampleBufferAudioRenderer* audioRenderer;
static AVSampleBufferRenderSynchronizer* renderSynchronizer;
// declared here for completeness; referenced in AudioInit / AudioDecodeAndPlaySample below
static OpusMSDecoder* opusDecoder;
static void* decodedPacketBuffer;

int samplesPerFrame = 240;
int channelCount    = 2;
int sampleRate      = 48000;
int streams         = 1;
int coupledStreams  = 1;
unsigned char mapping[8] = { 0, 1, 0, 0, 0, 0, 0, 0 };

CMTime startPTS;

// called when the stream is about to start
void AudioInit()
{
    renderSynchronizer = [[AVSampleBufferRenderSynchronizer alloc] init];
    audioRenderer = [[AVSampleBufferAudioRenderer alloc] init];
    [renderSynchronizer addRenderer:audioRenderer];
    
    int decodedPacketSize = samplesPerFrame * sizeof(short) * channelCount; // 240 samples per frame * 2 channels
    decodedPacketBuffer = SDL_malloc(decodedPacketSize);
    
    int err;
    opusDecoder = opus_multistream_decoder_create(sampleRate,       // 48000
                                                  channelCount,     // 2
                                                  streams,          // 1
                                                  coupledStreams,   // 1
                                                  mapping,
                                                  &err);

    [renderSynchronizer setRate:1.0 time:kCMTimeZero atHostTime:CMClockGetTime(CMClockGetHostTimeClock())];
    startPTS = CMClockGetTime(CMClockGetHostTimeClock());
}

// called every X milliseconds with a new packet of audio data to play, IF there's audio. (while testing, X = 5)
void AudioDecodeAndPlaySample(char* sampleData, int sampleLength)
{
    // decode the packet from Opus to (I think??) Linear PCM
    int numSamples;
    numSamples = opus_multistream_decode(opusDecoder,
                                         (unsigned char *)sampleData,
                                         sampleLength,
                                         (short*)decodedPacketBuffer,
                                         samplesPerFrame, // 240
                                         0);

    int bufferSize = sizeof(short) * numSamples * channelCount; // 240 samples * 2 channels

    CMTime currentPTS = CMTimeSubtract(CMClockGetTime(CMClockGetHostTimeClock()), startPTS);

    // LPCM stream description
    AudioStreamBasicDescription asbd = {
        .mFormatID          = kAudioFormatLinearPCM,
        .mFormatFlags       = kLinearPCMFormatFlagIsSignedInteger,
        .mBytesPerPacket    = sizeof(short) * channelCount,
        .mFramesPerPacket   = 1,
        .mBytesPerFrame     = sizeof(short) * channelCount,
        .mChannelsPerFrame  = channelCount, // 2
        .mBitsPerChannel    = 16,
        .mSampleRate        = sampleRate, // 48000
        .mReserved          = 0
    };
    
    // audio format description wrapper around asbd
    CMAudioFormatDescriptionRef audioFormatDesc;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                     &asbd,
                                                     0,
                                                     NULL,
                                                     0,
                                                     NULL,
                                                     NULL,
                                                     &audioFormatDesc);
    
    // data block to store decoded packet into
    CMBlockBufferRef blockBuffer;
    status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                decodedPacketBuffer,
                                                bufferSize,
                                                kCFAllocatorNull,
                                                NULL,
                                                0,
                                                bufferSize,
                                                0,
                                                &blockBuffer);
    
    // data block converted into a sample buffer
    CMSampleBufferRef sampleBuffer;
    status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(kCFAllocatorDefault,
                                                                  blockBuffer,
                                                                  audioFormatDesc,
                                                                  numSamples,
                                                                  currentPTS,
                                                                  NULL,
                                                                  &sampleBuffer);
    
    
    // queueing sample buffer onto audio renderer
    [audioRenderer enqueueSampleBuffer:sampleBuffer];
}

The AudioDecodeAndPlaySample function is called by the library I'm using; as the comment says, it gets called with roughly 5 ms worth of sample data at a time (and, importantly, it is not called at all during silence).

There are a lot of places where I could be going wrong here. I believe I'm right that the Opus decoder (docs here) decodes to interleaved linear PCM, and I hope I'm building the AudioStreamBasicDescription correctly. I definitely don't know what to do about the PTS (presentation timestamp) passed to CMAudioSampleBufferCreateReadyWithPacketDescriptions - I'm trying to derive one from current host time - init host time, but I don't know whether that's valid.

Most of the code samples I've seen wrap enqueueSampleBuffer in a requestMediaDataWhenReady block on a dispatch queue, and I've tried that too, to no avail. (I suspect that's better practice rather than strictly necessary, so I'm just trying to get the simplest case working first; I can put it back if it's required.)

If you're more comfortable with Swift, feel free to answer in Swift - I can work with either. (I'm stuck with Objective-C whether I like it or not.)

Rhy*_*man 5

Congratulations on finding what I consider one of the more obscure Apple audio playback APIs!

As MeLean correctly pointed out, your sample timestamps are not progressing (and you do need them to).
Beyond that, the AudioStreamBasicDescription is wrong, and you weren't giving the synchronizer a mapping between the timestamp timeline and the host time timeline.

Fixed ASBD:

// In uncompressed audio, a packet is one frame (mFramesPerPacket == 1).

// LPCM stream description
AudioStreamBasicDescription asbd = {
    .mFormatID          = kAudioFormatLinearPCM,
    .mFormatFlags       = kLinearPCMFormatFlagIsSignedInteger,
    .mBytesPerPacket    = sizeof(short) * channelCount,
    .mFramesPerPacket   = 1,
    .mBytesPerFrame     = sizeof(short) * channelCount,
    .mChannelsPerFrame  = channelCount, // 2
    .mBitsPerChannel    = 16,
    .mSampleRate        = sampleRate, // 48000
    .mReserved          = 0
};

One possible timeline mapping (a.k.a. play as soon as possible, consequences be damned) - this tells the synchronizer that presentation time zero corresponds to "now" on the host clock:

[renderSynchronizer setRate:1.0 time:kCMTimeZero atHostTime:CMClockGetTime(CMClockGetHostTimeClock())];

Progressing timestamps (the first buffer lands at time zero, which lines up with the kCMTimeZero mapping above):

// with your other variables
uint64_t samplesEnqueued = 0;

// ...

// data block converted into a sample buffer
CMSampleBufferRef sampleBuffer;
status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(kCFAllocatorDefault,
                                                              blockBuffer,
                                                              audioFormatDesc,
                                                              numSamples,
                                                              CMTimeMake(samplesEnqueued, sampleRate),
                                                              NULL,
                                                              &sampleBuffer);


samplesEnqueued += numSamples;

// queueing sample buffer onto audio renderer
[audioRenderer enqueueSampleBuffer:sampleBuffer];

// ...

You have your own constraints on when you feed data to the renderer, but the snippets in the API headers have the renderer calling you back instead. You can probably get away with ignoring this:

[audioRenderer requestMediaDataWhenReadyOnQueue:dispatch_get_main_queue() usingBlock:^{
    AudioDecodeAndPlaySample(sampleData, sampleLength);
    // get more sampleData
}];
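For completeness, here's a slightly fuller sketch of that callback-driven approach, assuming a hypothetical DequeueNextOpusPacket() standing in for however you pull packets off the buffer your network code fills; looping while readyForMoreMediaData is the usual pattern for requestMediaDataWhenReadyOnQueue:usingBlock:.

// Sketch only. DequeueNextOpusPacket() is hypothetical and is assumed to
// return {NULL, 0} when nothing is currently buffered.
typedef struct {
    char* data;
    int   length;
} OpusPacket;

OpusPacket DequeueNextOpusPacket(void); // hypothetical

void AudioStartCallbackDrivenPlayback(void)
{
    [audioRenderer requestMediaDataWhenReadyOnQueue:dispatch_get_main_queue() usingBlock:^{
        // Keep feeding the renderer for as long as it wants more data
        // and we actually have packets buffered.
        while (audioRenderer.readyForMoreMediaData) {
            OpusPacket packet = DequeueNextOpusPacket();
            if (packet.data == NULL) {
                break; // nothing buffered right now
            }
            AudioDecodeAndPlaySample(packet.data, packet.length);
        }
    }];
}

When the stream ends you'd call stopRequestingMediaData on the renderer.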

p.s. Your use of SDL_malloc suggests you might be using this in a game. Last time I used AVSampleBufferAudioRenderer, IIRC its latency was not impressive, though I may have been holding it wrong. If you need low latency, you may want to reconsider your design.

p.p.s. Silence => no callback means you'll have to adjust your timestamps to account for the missing silent frames.
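
One way to do that (my own sketch, not part of the original answer): estimate the gap from host-time deltas between callbacks and push samplesEnqueued forward accordingly, reusing the samplesPerFrame, sampleRate, and samplesEnqueued variables defined above.

// Sketch: call this at the top of AudioDecodeAndPlaySample, before stamping the buffer.
// A zero-initialized CMTime is invalid, so the first packet just records its arrival time.
static CMTime lastPacketHostTime;

void AdvanceTimelineForSilence(void)
{
    CMTime now = CMClockGetTime(CMClockGetHostTimeClock());

    if (CMTIME_IS_VALID(lastPacketHostTime)) {
        Float64 gapSeconds      = CMTimeGetSeconds(CMTimeSubtract(now, lastPacketHostTime));
        Float64 expectedSeconds = (Float64)samplesPerFrame / sampleRate; // ~5 ms per packet

        // If noticeably more time passed than one packet's worth, treat the
        // difference as skipped silence and advance the timeline.
        if (gapSeconds > expectedSeconds * 2.0) {
            samplesEnqueued += (uint64_t)((gapSeconds - expectedSeconds) * sampleRate);
        }
    }

    lastPacketHostTime = now;
}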