WebRTC native: AudioTrackSinkInterface added to track, but OnData is never called

Sam*_*der 5 c++ webrtc

I have been working on a product that uses WebRTC to exchange audio between a browser and a native client, the native side being implemented in C++. At present I have built the latest stable release of WebRTC (branch: branch-heads/65).

So far I have been able to get the peers to connect, and audio sent from the native client is received and rendered correctly in the browser. However, although the Chrome debug tools suggest that data is being sent from the browser to the native client, the native client never seems to receive any data through its audio track sink.

The following code is definitely called, and the tracks are added as expected.

void Conductor::OnAddStream(rtc::scoped_refptr<webrtc::MediaStreamInterface> stream)
{

    webrtc::AudioTrackVector atracks = stream->GetAudioTracks();
    for (auto track : atracks)
    {
        remote_audio.reset(new Native::AudioRenderer(this, track));
        track->set_enabled(true);
    }
}

// Audio renderer derived from webrtc::AudioTrackSinkInterface
// In the audio renderer constructor, AddSink is called on the track.
AudioRenderer::AudioRenderer(AudioCallback* callback, webrtc::AudioTrackInterface* track) : track_(track), callback_(callback)
{
// Can confirm this point is reached.
    track_->AddSink(this);
}

AudioRenderer::~AudioRenderer()
{
    track_->RemoveSink(this);
}

void AudioRenderer::OnData(const void* audio_data, int bits_per_sample, int sample_rate, size_t number_of_channels,
        size_t number_of_frames)
{
// This is never hit, despite the connection starting and streams being added.
    if (callback_ != nullptr)
    {
        callback_->OnAudioData(audio_data, bits_per_sample, sample_rate, number_of_channels, number_of_frames);
    }
}

I can also confirm that both offers include the option to receive audio:

Browser client offer:

// Create offer
var offerOptions = {
    offerToReceiveAudio: 1,
    offerToReceiveVideo: 0
};
pc.createOffer(offerOptions)
    .then(offerCreated);

Native client answer:

webrtc::PeerConnectionInterface::RTCOfferAnswerOptions o;
{
    o.voice_activity_detection = false;
    o.offer_to_receive_audio = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions::kOfferToReceiveMediaTrue;
    o.offer_to_receive_video = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions::kOfferToReceiveMediaTrue;
}
peer_connection_->CreateAnswer(this, o);

I have not been able to find any recent information about this problem, and consuming received audio in a client application seems like a common use case for the framework. Any ideas on where I might be going wrong when listening for inbound audio, or strategies for investigating why this isn't working?

Many thanks

Sam*_*der 5

I managed to find an alternative way to get the audio data out of WebRTC that works around this problem.

  1. Implement a custom webrtc::AudioDeviceModule implementation. Look at the webrtc source code to see how to do this.
  2. Capture the audio transport in the RegisterAudioCallback method, which is called when the call is established.

Snippet:

int32_t AudioDevice::RegisterAudioCallback(webrtc::AudioTransport * transport)
{
    transport_ = transport;
    return 0;
}
  3. Add a custom method to the device class that pulls audio from the audio transport using the NeedMorePlayData method. (Note: this appears to work with ntp_time_ms passed in as 0; it does not seem to be required.)

Snippet:

int32_t AudioDevice::NeedMorePlayData(
    const size_t nSamples,
    const size_t nBytesPerSample,
    const size_t nChannels,
    const uint32_t samplesPerSec,
    void* audioSamples,
    size_t& nSamplesOut,
    int64_t* elapsed_time_ms,
    int64_t* ntp_time_ms) const
{
    return transport_->NeedMorePlayData(nSamples,
        nBytesPerSample,
        nChannels,
        samplesPerSec,
        audioSamples,
        nSamplesOut,
        elapsed_time_ms,
        ntp_time_ms);
}