Tags: python, numpy, video-processing, python-3.x, pyav
Is it possible to read a video directly into a 3D NumPy array using PyAV? Currently I am looping over each frame:
import av
import numpy as np

i = 0
container = av.open('myvideo.avi')
for frame in container.decode(video=0):
    if i == 0: V = np.array(frame.to_ndarray(format='gray'))
    else: V = np.dstack((V, np.array(frame.to_ndarray(format='gray'))))
    i += 1
The first frame defines a 2D NumPy array (i=0); each subsequent frame (i>0) is stacked onto that array with np.dstack. Ideally, I would like to read the whole video into a 3D NumPy array of grayscale frames in one go.
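For reference, a minimal sketch of a pre-allocating variant, assuming the container reports its frame count via stream.frames (this attribute may be 0 for some files, so it is not a universal solution); it avoids the repeated copies made by np.dstack:

import av
import numpy as np

# Sketch only: pre-allocate the output when the stream reports its frame count
# (stream.frames may be 0 if the container does not store it).
container = av.open('myvideo.avi')
stream = container.streams.video[0]
V = np.empty((stream.height, stream.width, stream.frames), dtype=np.uint8)
for i, frame in enumerate(container.decode(stream)):
    V[:, :, i] = frame.to_ndarray(format='gray')  # Same (height, width, n_frames) layout as np.dstack produces.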
I could not find a solution using PyAV, and used ffmpeg-python instead.
ffmpeg-python is a Pythonic binding for FFmpeg, like PyAV.
The code reads the entire video into a 3D NumPy array of grayscale frames in one go.
The solution performs the following steps:

- Build a synthetic input video (for testing).
- Use FFprobe to get the resolution of the video frames.
- Stream the entire video into an in-memory buffer of raw 8-bit grayscale bytes.
- Reshape the buffer into an n x height x width NumPy array.

The code follows (please read the comments):
import ffmpeg
import numpy as np
from PIL import Image
in_filename = 'in.avi'
"""Build synthetic video, for testing begins:"""
# ffmpeg -y -r 10 -f lavfi -i testsrc=size=160x120:rate=1 -c:v libx264 -t 5 in.mp4
width, height = 160, 120
(
ffmpeg
.input('testsrc=size={}x{}:rate=1'.format(width, height), r=10, f='lavfi')
.output(in_filename, vcodec='libx264', t=5)
.overwrite_output()
.run()
)
"""Build synthetic video ends"""
# Use FFprobe for getting the resolution of the video frames
p = ffmpeg.probe(in_filename, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# Stream the entire video as one large array of bytes
in_bytes, _ = (
    ffmpeg
    .input(in_filename)
    .video  # Video only (no audio).
    .output('pipe:', format='rawvideo', pix_fmt='gray')  # Output raw video in 8-bit grayscale.
    .run(capture_stdout=True)
)
n_frames = len(in_bytes) // (height*width) # Compute the number of frames.
frames = np.frombuffer(in_bytes, np.uint8).reshape(n_frames, height, width) # Reshape buffer to array of n_frames frames (shape of each frame is (height, width)).
im = Image.fromarray(frames[0, :, :]) # Convert first frame to image object
im.show() # Display the image
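Note that frames shares memory with the immutable in_bytes buffer, so np.frombuffer returns it as a read-only array. A small sketch (continuing the snippet above) of making a writable copy when the pixel data must be modified:

frames = frames.copy()  # Copy because the frombuffer array is read-only (it shares memory with in_bytes).
frames[0, :, :] = 255 - frames[0, :, :]  # Illustration only: invert the first frame in place.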
Using PyAV:
When using PyAV, we have to decode the video frame by frame.
Compared to ffmpeg-python, the main advantage of PyAV is that we can use it without the FFmpeg CLI (ffmpeg.exe is not required on Windows).
To read all the video frames into one NumPy array, we can use the following stages:

- Open the video container and decode it frame by frame.
- Convert each decoded frame to a NumPy array and append it to a list.
- Convert the list of frames into a single NumPy array.
Code sample (using OpenCV to display the frames for testing):
import av
import numpy as np
import cv2
# Build input file using FFmpeg CLI (for testing):
# ffmpeg -y -f lavfi -i testsrc=size=192x108:rate=1:duration=10 -vcodec libx264 -pix_fmt yuv420p myvideo.avi
container = av.open('myvideo.avi')
frames = [] # List of frames - store video frames after converting to NumPy array.
for frame in container.decode(video=0):
    # Decode the video frame and convert it to a NumPy array in BGR pixel format (BGR because that is what OpenCV uses).
    frame = frame.to_ndarray(format='bgr24')  # For a grayscale video, use: frame = frame.to_ndarray(format='gray')
    frames.append(frame)  # Append the frame to the list of frames.
# Convert the list to NumPy array.
# Shape of each frame is (height, width, 3) [for Grayscale the shape is (height, width)]
# the shape of frames is (n_frames, height, width, 3) [for Grayscale the shape is (n_frames, height, width)]
frames = np.array(frames)
# Show the frames for testing:
for i in range(len(frames)):
    cv2.imshow('frame', frames[i])
    cv2.waitKey(1000)
cv2.destroyAllWindows()
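If only grayscale frames are needed, as in the original question, a compact sketch of the same approach that yields an (n_frames, height, width) array:

import av
import numpy as np

# Decode every frame as grayscale and stack the list once at the end.
container = av.open('myvideo.avi')
gray_frames = np.array([frame.to_ndarray(format='gray')
                        for frame in container.decode(video=0)])
# gray_frames.shape is (n_frames, height, width), dtype uint8.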