Correctly calculating PTS and DTS to sync audio and video (ffmpeg, C++)

Kai*_*dul 5 c++ audio video ffmpeg

I am trying to mux H264-encoded data and G711 PCM data into a mov multimedia container. I create an AVPacket from the encoded data, and initially the PTS and DTS values of the video/audio frames equal AV_NOPTS_VALUE, so I calculated the DTS from the current time. My code:

```cpp
bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
    .....................................
    .....................................
    .....................................
    AVPacket pkt = {0};
    av_init_packet(&pkt);
    int64_t dts = av_gettime();
    dts = av_rescale_q(dts, (AVRational){1, 1000000}, m_pVideoStream->time_base);
    int duration = 90000 / VIDEO_FRAME_RATE;
    if(m_prevVideoDts > 0LL) {
        duration = dts - m_prevVideoDts;
    }
    m_prevVideoDts = dts;

    pkt.pts = AV_NOPTS_VALUE;
    pkt.dts = m_currVideoDts;
    m_currVideoDts += duration;
    pkt.duration = duration;
    if(bIFrame) {
        pkt.flags |= AV_PKT_FLAG_KEY;
    }
    pkt.stream_index = m_pVideoStream->index;
    pkt.data = (uint8_t*) pData;
    pkt.size = iDataSize;

    int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

    if(ret < 0) {
        LogErr("Writing video frame failed.");
        return false;
    }

    Log("Writing video frame done.");

    av_free_packet(&pkt);
    return true;
}

bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
    .................................
    .................................
    .................................
    AVPacket pkt = {0};
    av_init_packet(&pkt);

    int64_t dts = av_gettime();
    dts = av_rescale_q(dts, (AVRational){1, 1000000}, (AVRational){1, 90000});
    int duration = AUDIO_STREAM_DURATION; // 20
    if(m_prevAudioDts > 0LL) {
        duration = dts - m_prevAudioDts;
    }
    m_prevAudioDts = dts;
    pkt.pts = AV_NOPTS_VALUE;
    pkt.dts = m_currAudioDts;
    m_currAudioDts += duration;
    pkt.duration = duration;

    pkt.stream_index = m_pAudioStream->index;
    pkt.flags |= AV_PKT_FLAG_KEY;
    pkt.data = (uint8_t*) pEncodedData;
    pkt.size = iDataSize;

    int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
    if(ret < 0) {
        LogErr("Writing audio frame failed: %d", ret);
        return false;
    }

    Log("Writing audio frame done.");

    av_free_packet(&pkt);
    return true;
}
```

I added the streams like this:

```cpp
AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
    ................................
    .................................
    pStream = avformat_new_stream(m_pFormatCtx, codec);
    if (!pStream) {
        LogErr("Could not allocate stream.");
        return NULL;
    }
    pStream->id = m_pFormatCtx->nb_streams - 1;
    pCodecCtx = pStream->codec;
    pCodecCtx->codec_id = codecID;

    switch(codec->type) {
    case AVMEDIA_TYPE_VIDEO:
        pCodecCtx->bit_rate = VIDEO_BIT_RATE;
        pCodecCtx->width = PICTURE_WIDTH;
        pCodecCtx->height = PICTURE_HEIGHT;
        pStream->time_base = (AVRational){1, 90000};
        pStream->avg_frame_rate = (AVRational){90000, 1};
        pStream->r_frame_rate = (AVRational){90000, 1}; // though the frame rate is variable and around 15 fps
        pCodecCtx->pix_fmt = STREAM_PIX_FMT;
        m_pVideoStream = pStream;
        break;

    case AVMEDIA_TYPE_AUDIO:
        pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
        pCodecCtx->bit_rate = AUDIO_BIT_RATE;
        pCodecCtx->sample_rate = AUDIO_SAMPLE_RATE;
        pCodecCtx->channels = 1;
        m_pAudioStream = pStream;
        break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate.
     * The flag belongs on the codec context, not the format context. */
    if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
        pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return pStream;
}
```

There are several problems with this calculation:

1. Video lags behind audio, and the lag keeps growing over time.

2. Suppose an audio frame arrives late in WriteAudio(..), say by 3 seconds. The late frame should then start playing with a 3-second delay, but it does not: the delayed frame is played back-to-back with the previous one.

3. Sometimes I record for about 40 seconds, yet the reported file duration is more like 2 minutes. Audio/video plays only for the first 40 seconds; the rest of the file contains nothing, and the seek bar jumps straight to the end after 40 seconds (tested in VLC).

EDIT:


Following Ronald S. Bultje's suggestion, my understanding is that

```cpp
m_pAudioStream->time_base = (AVRational){1, 9000}; // actually no need to set, as 9000 is already the default value for audio as you said
m_pVideoStream->time_base = (AVRational){1, 9000};
```

should be set, so that both the audio and the video stream are now in the same timebase units.


For video:

```cpp
...................
...................

int64_t dts = av_gettime(); // get current time in microseconds
dts *= 9000;
dts /= 1000000; // 1 second = 10^6 microseconds
pkt.pts = AV_NOPTS_VALUE; // is it okay?
pkt.dts = dts;
// and no need to set pkt.duration, right?
```

For audio (exactly the same as video, right?):

```cpp
...................
...................

int64_t dts = av_gettime(); // get current time in microseconds
dts *= 9000;
dts /= 1000000; // 1 second = 10^6 microseconds
pkt.pts = AV_NOPTS_VALUE; // is it okay?
pkt.dts = dts;
// and no need to set pkt.duration, right?
```

I think they now effectively share the same currDts, right? Please correct me if I am wrong anywhere or missing something.


Also, if I want to use (AVRational){1, frameRate} as the video stream timebase and (AVRational){1, sampleRate} as the audio stream timebase, what should the correct code look like?
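If I understand correctly, only the target timebase of the rescale would change. A sketch of my understanding (m_startTime is a hypothetical member holding a shared recording start time in microseconds, and pktVideo/pktAudio stand for the packets built in WriteVideo()/WriteAudio()):

```cpp
// Sketch (assumption, not verified): rescale a shared wallclock offset
// into each stream's own timebase with av_rescale_q().
int64_t elapsed = av_gettime() - m_startTime; // microseconds since recording start

// Video stream with time_base {1, frameRate}: one tick per frame (coarse).
pktVideo.dts = av_rescale_q(elapsed, (AVRational){1, 1000000},
                            (AVRational){1, VIDEO_FRAME_RATE});

// Audio stream with time_base {1, sampleRate}: one tick per sample.
pktAudio.dts = av_rescale_q(elapsed, (AVRational){1, 1000000},
                            (AVRational){1, AUDIO_SAMPLE_RATE});
```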


EDIT 2.0:

```cpp
m_pAudioStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
m_pVideoStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
```

```cpp
bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
    ...........................
    ......................
    AVPacket pkt = {0};
    av_init_packet(&pkt);

    int64_t dts = av_gettime() / 1000; // convert into millisecond
    dts = dts * VIDEO_FRAME_RATE;
    if(m_dtsOffset < 0) {
        m_dtsOffset = dts;
    }

    pkt.pts = AV_NOPTS_VALUE;
    pkt.dts = (dts - m_dtsOffset);

    pkt.stream_index = m_pAudioStream->index;
    pkt.flags |= AV_PKT_FLAG_KEY;
    pkt.data = (uint8_t*) pEncodedData;
    pkt.size = iDataSize;

    int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
    if(ret < 0) {
        LogErr("Writing audio frame failed: %d", ret);
        return false;
    }

    Log("Writing audio frame done.");

    av_free_packet(&pkt);
    return true;
}

bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
    ........................................
    .................................
    AVPacket pkt = {0};
    av_init_packet(&pkt);
    int64_t dts = av_gettime() / 1000;
    dts = dts * VIDEO_FRAME_RATE;
    if(m_dtsOffset < 0) {
        m_dtsOffset = dts;
    }
    pkt.pts = AV_NOPTS_VALUE;
    pkt.dts = (dts - m_dtsOffset);

    if(bIFrame) {
        pkt.flags |= AV_PKT_FLAG_KEY;
    }
    pkt.stream_index = m_pVideoStream->index;
    pkt.data = (uint8_t*) pData;
    pkt.size = iDataSize;

    int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

    if(ret < 0) {
        LogErr("Writing video frame failed.");
        return false;
    }

    Log("Writing video frame done.");

    av_free_packet(&pkt);
    return true;
}
```

Is this last change okay? Video and audio now seem to be in sync. The only remaining problem: the audio plays without any delay, no matter how late its packets arrive. For example:


Packets arriving: 1 2 3 4 ... (the next frame arrives 3 seconds later) ... 5


Audio played: 1 2 3 4 (no delay) 5


EDIT 3.0:


Zeroed audio sample data:

```cpp
// Use a plain allocated buffer for the silent samples; an AVFrame is not
// needed for a muxer-side packet (av_frame_alloc() would not allocate the
// data buffers anyway).
uint8_t* pSilentData = (uint8_t*) av_malloc(iDataSize);
memset(pSilentData, 0, iDataSize); // zeroed audio sample data

pkt.data = pSilentData;
pkt.size = iDataSize;

// ... write the packet with av_interleaved_write_frame(), then free:
av_freep(&pSilentData);
```

Is this okay? After writing it into the file container, however, there are bits of noise when the media plays back. What is the problem?


EDIT 4.0:


So, for µ-law audio the zero value is represented as 0xff. Hence:

```cpp
memset(pSilentData, 0xff, iDataSize); // µ-law silence, not 0x00
```

That solved my problem.


Ron*_*tje 3

Timestamps (e.g. dts) should be in AVStream.time_base units. You request a video timebase of 1/90000 and the default audio timebase (1/9000), but you write dts values using a timebase of 1/100000. I am also not sure the requested timebase is guaranteed to be maintained during header writing; your muxer may change the values and expect you to deal with the new ones.
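For instance (a sketch, not your exact setup, reusing your Log() helper), you could read the timebases back after writing the header instead of assuming the requested values survived:

```cpp
// Sketch: the muxer may override the requested timebases when the header
// is written, so read the actual values back afterwards and rescale into
// those, rather than into the values you asked for.
if (avformat_write_header(m_pFormatCtx, NULL) < 0)
    return false;
Log("video time_base=%d/%d, audio time_base=%d/%d",
    m_pVideoStream->time_base.num, m_pVideoStream->time_base.den,
    m_pAudioStream->time_base.num, m_pAudioStream->time_base.den);
```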

So code like this:

```cpp
int64_t dts = av_gettime();
dts = av_rescale_q(dts, (AVRational){1, 1000000}, (AVRational){1, 90000});
int duration = AUDIO_STREAM_DURATION; // 20
if(m_prevAudioDts > 0LL) {
    duration = dts - m_prevAudioDts;
}
```

will not work. Change it to something that uses the audio stream's timebase, and do not set duration unless you know what you are doing. (The same goes for video.)
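Something along these lines instead (a sketch, assuming the stream timebase is final once the header has been written):

```cpp
// Sketch: rescale the wallclock timestamp into the audio stream's own
// timebase and let the muxer worry about durations.
int64_t dts = av_gettime(); // microseconds
pkt.dts = av_rescale_q(dts, (AVRational){1, 1000000},
                       m_pAudioStream->time_base);
pkt.pts = pkt.dts; // assumption: PCM audio has no reordering, so pts == dts
// pkt.duration deliberately left unset
```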

```cpp
m_prevAudioDts = dts;
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = m_currAudioDts;
m_currAudioDts += duration;
pkt.duration = duration;
```

That second chunk looks scary, especially combined with the similar code for video. The problem here is that the first packet of both streams gets a timestamp of zero, regardless of the inter-packet delay between the streams. You need a parent currDts shared between all streams, otherwise your streams will be out of sync forever.
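For instance (a sketch; m_sharedStartUs is a hypothetical member initialized to -1), both WriteAudio() and WriteVideo() could derive their timestamps from one clock:

```cpp
// Sketch: a single origin shared by all streams replaces the independent
// m_currAudioDts / m_currVideoDts counters that both start at zero.
if (m_sharedStartUs < 0)
    m_sharedStartUs = av_gettime(); // set by whichever stream writes first
int64_t elapsed = av_gettime() - m_sharedStartUs;
// pStream is the stream being written (audio or video).
pkt.dts = av_rescale_q(elapsed, (AVRational){1, 1000000}, pStream->time_base);
```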

[EDIT]

So, regarding your edits: if you have an audio gap, I think you need to insert silence (zeroed audio sample data) for the duration of the gap.
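A minimal sketch of that gap-filling for G.711 µ-law (where, per EDIT 4.0 above, silence is 0xff rather than 0x00; gapBytes and gapStartDts are hypothetical values computed from the detected gap length):

```cpp
#include <vector>

// Sketch: write one packet of mu-law silence covering the gap.
std::vector<uint8_t> silence(gapBytes, 0xff); // mu-law zero level, per EDIT 4.0
AVPacket pkt = {0};
av_init_packet(&pkt);
pkt.stream_index = m_pAudioStream->index;
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.data = silence.data();
pkt.size = (int) silence.size();
pkt.dts = gapStartDts; // keep timestamps advancing through the gap
if (av_interleaved_write_frame(m_pFormatCtx, &pkt) < 0)
    LogErr("Writing silence packet failed.");
```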