Speeding up a matplotlib animation to a video file

gag*_*gio 4 python ffmpeg matplotlib raspberry-pi raspbian

On Raspbian (Raspberry Pi 2), the following minimal example, stripped down from my script, correctly produces an mp4 file:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

def anim_lift(x, y):

    #set up the figure
    fig = plt.figure(figsize=(15, 9))

    def animate(i):
        # update the position of the moving point
        pointplot.set_data([x[i]], [y[i]])
        return pointplot,

    # First frame: background line plus the moving point
    plt.plot(x, y)
    pointplot, = plt.plot(x[0], y[0], 'or')

    anim = animation.FuncAnimation(fig, animate, repeat = False,
                                   frames=range(1,len(x)), 
                                   interval=200,
                                   blit=True, repeat_delay=1000)

    anim.save('out.mp4')
    plt.close(fig)

# Number of frames
nframes = 200

# Generate data
x = np.linspace(0, 100, num=nframes)
y = np.random.random_sample(np.size(x))

anim_lift(x, y)

Now, the resulting file is good quality and small, but generating a 170-frame movie takes about 15 minutes, which is unacceptable for my application. I am looking for a significant speedup; an increase in video file size is not a problem.

I believe the bottleneck in the video production is the temporary saving of each frame in png format: during processing I can see the png files appearing in my working directory, with the CPU load at only 25%.
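On a newer matplotlib it is worth checking which movie writers are actually registered before rewriting anything: the pipe-based 'ffmpeg' writer streams raw frames to the encoder, while 'ffmpeg_file' is the variant that writes a temporary image file per frame. A minimal check (the exact names listed depend on your matplotlib and ffmpeg installation):

```python
from matplotlib import animation

# List the movie writers matplotlib has registered on this system.
print(animation.writers.list())

# True only if the ffmpeg binary is found on the PATH.
print(animation.FFMpegWriter.isAvailable())
```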

Please suggest a solution; it may also be based on a different package rather than plain matplotlib.animation, such as OpenCV or moviepy (the latter is already imported in my project anyway).

Versions used:

  • Python 2.7.3
  • matplotlib 1.1.1rc2
  • ffmpeg 0.8.17-6:0.8.17-1+rpi1

Aul*_*hal 6

The bottleneck in saving an animation to file lies in the use of figure.savefig(). Here is a homemade subclass of matplotlib's FFMpegWriter, inspired by gaggio's answer. It does not use savefig (and therefore ignores savefig_kwargs), but requires minimal changes to your animation script.

from matplotlib.animation import FFMpegWriter

class FasterFFMpegWriter(FFMpegWriter):
    '''FFMpeg-pipe writer bypassing figure.savefig.'''
    def __init__(self, **kwargs):
        '''Initialize the writer and set the default frame format.'''
        super(FasterFFMpegWriter, self).__init__(**kwargs)
        self.frame_format = 'argb'

    def grab_frame(self, **savefig_kwargs):
        '''Grab the image information from the figure and save as a movie frame.

        Doesn't use savefig to be faster: savefig_kwargs will be ignored.
        '''
        try:
            # re-adjust the figure size and dpi in case it has been changed by the
            # user.  We must ensure that every frame is the same size or
            # the movie will not save correctly.
            self.fig.set_size_inches(self._w, self._h)
            self.fig.set_dpi(self.dpi)
            # Draw and save the frame as an argb string to the pipe sink
            self.fig.canvas.draw()
            self._frame_sink().write(self.fig.canvas.tostring_argb()) 
        except (RuntimeError, IOError) as e:
            out, err = self._proc.communicate()
            raise IOError('Error saving animation to file (cause: {0}) '
                      'Stdout: {1} StdError: {2}. It may help to re-run '
                      'with --verbose-debug.'.format(e, out, err)) 

This encodes frames considerably faster than the default FFMpegWriter.

You can use it as described in this example.
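For instance (a sketch, not the linked example: the fps and codec values below are illustrative, and `anim` stands for an existing FuncAnimation object as in the question), you pass a writer instance to anim.save instead of relying on the default:

```python
from matplotlib import animation

# Build a pipe-based writer; FasterFFMpegWriter from the snippet above
# accepts the same keyword arguments (fps, codec, bitrate, ...).
writer = animation.FFMpegWriter(fps=5, codec='mpeg4')

# anim.save('out.mp4', writer=writer)  # 'anim' is the FuncAnimation to encode
```

Passing a writer object this way is also how you would substitute FasterFFMpegWriter for the stock class without touching the rest of the script.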


gag*_*gio 5

An improved solution, based on the answer in this post, reduces the time by a factor of about 10.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import subprocess

def testSubprocess(x, y):

    #set up the figure
    fig = plt.figure(figsize=(15, 9))
    canvas_width, canvas_height = fig.canvas.get_width_height()

    # First frame: background line plus the moving point
    plt.plot(x, y)
    pointplot, = plt.plot(x[0], y[0], 'or')

    def update(frame):
        # your matplotlib code goes here
        pointplot.set_data([x[frame]], [y[frame]])

    # Open an ffmpeg process
    outf = 'testSubprocess.mp4'
    cmdstring = ('ffmpeg', 
                 '-y', '-r', '1', # overwrite, 1fps
                 '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                 '-pix_fmt', 'argb', # format
                 '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                 '-vcodec', 'mpeg4', outf) # output encoding
    p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

    # Draw frames and write to the pipe
    for frame in range(len(x)):
        # draw the frame
        update(frame)
        fig.canvas.draw()

        # extract the image as an ARGB string
        string = fig.canvas.tostring_argb()

        # write to pipe
        p.stdin.write(string)

    # Finish up: close the pipe and wait for ffmpeg to exit
    p.communicate()

# Number of frames
nframes = 200

# Generate data
x = np.linspace(0, 100, num=nframes)
y = np.random.random_sample(np.size(x))

testSubprocess(x, y)
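One easy mistake with this raw-video pipe is writing frames whose byte count does not match the `-s WxH` geometry given to ffmpeg, which silently corrupts the output. A small sanity check (using the off-screen Agg backend; `buffer_rgba` is the non-deprecated counterpart of `tostring_argb` in recent matplotlib):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(figsize=(4, 3), dpi=100)
plt.plot([0, 1], [0, 1])
fig.canvas.draw()

w, h = fig.canvas.get_width_height()
frame = np.asarray(fig.canvas.buffer_rgba())

# Every frame piped to ffmpeg must be exactly width * height * 4 bytes,
# matching the '-s %dx%d' and 4-byte-per-pixel format passed on the command line.
assert frame.nbytes == w * h * 4
```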

I suspect a further speedup could similarly be obtained by piping the raw image data to gstreamer, which can now take advantage of hardware encoding on the Raspberry Pi; see this discussion.