FFmpeg skips rendering frames

Srd*_* M. 12 c# windows ffmpeg

When I extract frames from a video, I noticed that ffmpeg does not finish rendering some of the images. The problem turned out to be the byte "padding" between two jpeg images. If my buffer size is 4096, and bytes from the previous image and the next image both land in that buffer without being separated by some number of bytes, the next image is not rendered correctly. Why is that?

-i path -f image2pipe -c:v mjpeg -q:v 2 -vf fps=25 pipe:1

[image]

Rendered frame:

[image]

Code sample:

public void ExtractFrames()
{
    string FFmpegPath = "Path...";
    string Arguments = $"-i { VideoPath } -f image2pipe -c:v mjpeg -q:v 2 -vf fps=25/1 pipe:1";
    using (Process cmd = GetProcess(FFmpegPath, Arguments))
    {
        cmd.Start();
        FileStream fStream = cmd.StandardOutput.BaseStream as FileStream;

        bool Add = false;
        int i = 0, n = 0, BufferSize = 4096;
        byte[] buffer = new byte[BufferSize + 1];

        MemoryStream mStream = new MemoryStream();

        while (true)
        {
            if (i.Equals(BufferSize))
            {
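                // buffer consumed: carry the last byte over to index 0 and refill, so a two-byte marker split across reads is not lost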
                i = 0;
                buffer[0] = buffer[BufferSize];
                if (fStream.Read(buffer, 1, BufferSize) == 0)
                    break;
            }

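            // JPEG Start Of Image (SOI) marker: 0xFF 0xD8 (255, 216)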
            if (buffer[i].Equals(255) && buffer[i + 1].Equals(216))
            {
                Add = true;
            }

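            // JPEG End Of Image (EOI) marker: 0xFF 0xD9 (255, 217) - append the EOI bytes and save the frame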
            if (buffer[i].Equals(255) && buffer[i + 1].Equals(217))
            {
                n++;
                Add = false;
                mStream.Write(new byte[] { 255, 217 }, 0, 2);
                File.WriteAllBytes($@"C:\Path...\{n}.jpg", mStream.ToArray());
                mStream = new MemoryStream();
            }

            if (Add)
                mStream.WriteByte(buffer[i]);

            i++;
        }
        cmd.WaitForExit();
        cmd.Close();
    }
}

private Process GetProcess(string FileName, string Arguments)
{
    return new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = FileName,
            Arguments = Arguments,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = false,
        }
    };
}

A video sample of 60 seconds or longer (> 480p) should be used for testing purposes.

VC.*_*One 3

If the file is already stored, it may be easier to simply tell FFmpeg to convert that video file into Jpegs.

(1) Read a video file and output frame Jpegs (no pipes or memory/file streams involved):

string str_MyProg = "C:/FFmpeg/bin/ffmpeg.exe";
string VideoPath = "C:/someFolder/test_vid.mp4";

string save_folder = "C:/someOutputFolder/";

//# Setup the arguments to directly output a sequence of images (frames)
string str_CommandArgs = "-i " + VideoPath + " -vf fps=25/1 " + save_folder + "n_%03d.jpg"; //the n_%03d replaces "n++" count

System.Diagnostics.ProcessStartInfo cmd_StartInfo = new System.Diagnostics.ProcessStartInfo(str_MyProg, str_CommandArgs);

cmd_StartInfo.RedirectStandardError = false; //set false
cmd_StartInfo.RedirectStandardOutput = false; //set false
cmd_StartInfo.UseShellExecute = true; //set true
cmd_StartInfo.CreateNoWindow = true;  //don't need the black window

//Create a process, assign its ProcessStartInfo and start it
System.Diagnostics.Process cmd = new System.Diagnostics.Process();
cmd.StartInfo = cmd_StartInfo;

cmd.Start();

//# Started process. Check output folder for images...

(2) The pipe method:

When using a pipe, FFmpeg streams the output back like a broadcast. When the last video frame is reached, that same last-frame "image" is repeated endlessly. You must tell FFmpeg manually when to stop sending to your application (there is no "exit" code in this situation).

This line in the code specifies how many frames to extract before stopping:

int frames_expected_Total = 0; //is... (frame_rate x Duration) = total expected frames

You can calculate that limit as: output-FPS * input-Duration.
Example: the video duration is 4.88 seconds, so 25 * 4.88 = a limit of 122 frames for this video.
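For instance, a minimal sketch of that calculation (the 4.88 second duration and 25 fps are just the example figures above; substitute the real duration and output rate of your own input):

double input_Duration = 4.88;  // seconds - example figure from above, use your video's real duration
int output_FPS = 25;           // must match the -vf fps=25/1 output rate

// total expected frames = output-FPS * input-Duration, rounded up to be safe
int frames_expected_Total = (int)Math.Ceiling(output_FPS * input_Duration); // 25 * 4.88 = 122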

"If my buffer size is 4096 ... the next image is not rendered correctly. Why is that?"

You get "glitched" images because the buffer is too small to hold a complete image...

The buffer size formula is:

int BufferSize = ( video_Width * video_Height );

Since the final compressed jpeg will always be smaller than that amount, this guarantees that BufferSize can hold any complete frame without errors. Out of interest, where did you get the number 4096 from? Standard output usually delivers a maximum packet size of 32 kb (32768 bytes).
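As a minimal sketch of that sizing (the 1280 x 720 dimensions are only an assumed example; use your own video's width and height):

int video_Width = 1280;    // assumed example width  - use the input video's width
int video_Height = 720;    // assumed example height - use the input video's height

// a finished jpeg frame compresses to fewer bytes than width * height,
// so a buffer this size can always hold one complete frame from the pipe
int BufferSize = video_Width * video_Height;          // 921600 bytes for 1280 x 720
byte[] buffer = new byte[BufferSize + 1];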

Solution (tested)
Here is a complete working example that solves the "glitched" images problem. Check the code comments...

using System;
using System.IO;
using System.Net;
using System.Drawing;
using System.Diagnostics;
using System.Collections.Generic;


namespace FFmpeg_Vid_to_JPEG //replace with your own project "namespace"
{
    class Program
    {
        public static void Main(string[] args)
        {
            //# testing the Extract function...

            ExtractFrames();
        }

        public static void ExtractFrames()
        {
            //# define paths for PROCESS
            string FFmpegPath = "C:/FFmpeg/bin/ffmpeg.exe";
            string VideoPath = "C:/someFolder/test_vid.mp4";

            //# FFmpeg arguments for PROCESS
            string str_myCommandArgs = "-i " + VideoPath + " -f image2pipe -c:v mjpeg -q:v 2 -vf fps=25/1 pipe:1";

            //# define paths for SAVE folder & filename
            string save_folder = "C:/someOutputFolder/";
            string save_filename = ""; //update name later on, during SAVE commands

            MemoryStream mStream = new MemoryStream(); //create once, recycle same for each frame

            ////// # also create these extra variables...

            bool got_current_JPG_End = false; //flag to begin extraction of image bytes within stream

            int pos_in_Buffer = 0; //pos in buffer(when checking for Jpeg Start/End bytes)
            int this_jpeg_len = 0; // holds bytes of single jpeg image to save... correct length avoids cropping effect
            int pos_jpeg_start = 0; int pos_jpeg_end = 0; //marks the start/end pos of one image within total stream

            int jpeg_count = 0; //count of exported Jpeg files (replaces the "n++" count)
            int frames_expected_Total = 0; //number of frames to get before stopping

            //# use input video's width x height as buffer size //eg: size 921600 = 1280 W x 720H 
            int BufferSize = 921600;  
            byte[] buffer = new byte[BufferSize + 1];

            // Create a process, assign its ProcessStartInfo and start it
            ProcessStartInfo cmd_StartInfo = new ProcessStartInfo(FFmpegPath, str_myCommandArgs);

            cmd_StartInfo.RedirectStandardError = true;
            cmd_StartInfo.RedirectStandardOutput = true; //set true to redirect the process stdout to the Process.StandardOutput StreamReader
            cmd_StartInfo.UseShellExecute = false;
            cmd_StartInfo.CreateNoWindow = true; //do not create the black window

            Process cmd = new System.Diagnostics.Process();
            cmd.StartInfo = cmd_StartInfo;

            if (cmd.Start())
            {
                //# holds FFmpeg output bytes stream...
                var ffmpeg_Output = cmd.StandardOutput.BaseStream; //replaces: fStream = cmd.StandardOutput.BaseStream as FileStream;

                cmd.BeginErrorReadLine(); //# begin async reading of FFmpeg's stderr (log output) so the redirected pipe does not fill up and block

                //# get (read) first two bytes in stream, so can check for Jpegs' SOI (xFF xD8)
                //# each "Read" auto moves forward by read "amount"...
                ffmpeg_Output.Read(buffer, 0, 1);
                ffmpeg_Output.Read(buffer, 1, 1);

                pos_in_Buffer = this_jpeg_len = 2; //update reading pos

                //# we know first jpeg's SOI is always at buffer pos: [0] and [1]
                pos_jpeg_start = 0; got_current_JPG_End = false;

                //# testing amount... Duration 4.88 sec, FPS 25 --> (25 x 4.88) = 122 frames        
                frames_expected_Total = 122; //122; //number of Jpegs to get before stopping.

                while(true)
                {
                    //# For Pipe video you must exit stream manually
                    if ( jpeg_count == (frames_expected_Total + 1) )
                    {
                        cmd.Close(); cmd.Dispose(); //exit the process
                        break; //exit if got required number of frame Jpegs
                    }

                    //# otherwise read as usual    
                    ffmpeg_Output.Read(buffer, pos_in_Buffer, 1);
                    this_jpeg_len +=1; //add 1 to expected jpeg bytes length

                    //# find JPEG start (SOI is bytes 0xFF 0xD8)
                    if ( (buffer[pos_in_Buffer] == 0xD8)  && (buffer[pos_in_Buffer-1] == 0xFF) )
                    {
                        if  (got_current_JPG_End == true) 
                        {   
                            pos_jpeg_start = (pos_in_Buffer-1);
                            got_current_JPG_End = false; 
                        }
                    }

                    //# find JPEG ending (EOI is bytes 0xFF 0xD9) then SAVE FILE
                    if ( (buffer[pos_in_Buffer] == 0xD9) && (buffer[pos_in_Buffer-1] == 0xFF) )
                    {
                        if  (got_current_JPG_End == false) 
                        { 
                            pos_jpeg_end = pos_in_Buffer; got_current_JPG_End = true;

                            //# update saved filename 
                            save_filename = save_folder + "n_" + (jpeg_count).ToString() + ".jpg";

                            try
                            {
                                //# If the Jpeg save folder doesn't exist, create it.
                                if ( !Directory.Exists( save_folder ) ) { Directory.CreateDirectory( save_folder ); }
                            } 
                            catch (Exception)
                            { 
                                //# handle any folder create errors here.
                            }

                            mStream.Write(buffer, pos_jpeg_start, this_jpeg_len); //

                            //# save to disk...
                            File.WriteAllBytes(@save_filename, mStream.ToArray());

                            //recycle MemoryStream, avoids creating multiple = new MemoryStream();
                            mStream.SetLength(0); mStream.Position = 0;

                            //# reset for next pic
                            jpeg_count +=1; this_jpeg_len=0;

                            pos_in_Buffer = -1; //allows it to become 0 position at incrementation part
                        }
                    }

                    pos_in_Buffer += 1; //increment to store next byte in stdOut stream

                } //# end While

            }
            else
            {
               // Handler code here for "Process is not running" situation
            }

        } //end ExtractFrame function


    } //end class
} //end program

Note: When modifying the above code, make sure you keep the creation of the Process inside the ExtractFrames() function itself. Do not use some external function that returns a Process; that is, do not set it up as: using (Process cmd = GetProcess(FFmpegPath, Arguments))
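As a minimal skeleton of that recommended structure (the paths and argument string reuse the example values from above; the frame-reading loop is omitted here and shown in full in the example above):

// Sketch only: the Process is created, started and closed inside ExtractFrames() itself,
// not returned from a helper function and not wrapped in a using block.
public static void ExtractFrames()
{
    string FFmpegPath = "C:/FFmpeg/bin/ffmpeg.exe";
    string Arguments = "-i C:/someFolder/test_vid.mp4 -f image2pipe -c:v mjpeg -q:v 2 -vf fps=25/1 pipe:1";

    ProcessStartInfo cmd_StartInfo = new ProcessStartInfo(FFmpegPath, Arguments);
    cmd_StartInfo.RedirectStandardOutput = true;
    cmd_StartInfo.UseShellExecute = false;
    cmd_StartInfo.CreateNoWindow = true;

    Process cmd = new Process();
    cmd.StartInfo = cmd_StartInfo;

    if (cmd.Start())
    {
        // ... read the piped jpeg bytes here, exactly as in the full example above ...

        cmd.Close(); cmd.Dispose(); // stop manually once the expected number of frames has been saved
    }
}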

Good luck. Let me know how it goes.

(PS: Please excuse the "excessive" code comments; they are for the benefit of future readers who may or may not understand what this code does to work correctly around the buffer issue.)