TensorFlow - Reading video frames from TFRecords files

ver*_*man 7 python deep-learning tensorflow tfrecord tensorflow-datasets

TL;DR: My question is how to load compressed video frames from TFRecords.

I am setting up a data pipeline for training deep learning models on a large video dataset (Kinetics). To do this I am using TensorFlow, more specifically the tf.data.Dataset and TFRecordDataset structures. Since the dataset contains roughly 300k videos of 10 seconds each, there is a large amount of data to deal with. During training I want to randomly sample 64 consecutive frames from a video, so fast random sampling is important. To achieve this, there are a number of possible data-loading scenarios during training:

  1. Sample from the videos. Load a video with ffmpeg or OpenCV and sample frames from it (a minimal sketch follows this list). Seeking inside a video is not ideal, and decoding the video stream is much slower than decoding JPGs.
  2. JPG images. Preprocess the dataset by extracting all video frames as JPG images. This produces an enormous number of files, which will probably make random access slow.
  3. Data containers. Preprocess the dataset into TFRecords or HDF5 files. This requires more work to get the pipeline ready, but it is most likely the fastest of these options.
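
To make option (1) concrete, here is a minimal sketch of sampling a clip directly from a video file with OpenCV; the video path and clip length are hypothetical placeholders. It mainly shows where the seek-and-decode cost comes from:

import random

import cv2


def sample_clip_opencv(video_path="some_video.mp4", seq_len=64):
    """Decode `seq_len` consecutive frames starting at a random offset.

    NOTE: video_path and seq_len are placeholder values for illustration.
    """
    cap = cv2.VideoCapture(video_path)
    num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    start = random.randint(0, max(0, num_frames - seq_len))
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)  # seeking can be slow for long videos
    frames = []
    for _ in range(seq_len):
        ok, frame = cap.read()               # decodes one frame per call
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames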

I decided to go with option (3) and use TFRecord files to store a preprocessed version of the dataset. However, this is not as straightforward as it seems either, for example:

  1. Compression. Storing the video frames as uncompressed byte data in the TFRecords would require an enormous amount of disk space (a rough size estimate follows this list). Therefore I extract all the video frames, apply JPG compression and store the compressed bytes in the TFRecords.
  2. Video data. We are dealing with videos, so each example in the TFRecord file will be quite large and contain several video frames (typically 250-300 for a 10-second clip, depending on the frame rate).
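
To put rough numbers on the compression point, a back-of-the-envelope estimate, assuming frames resized to 256x256x3 and about 15 KB per JPG-compressed frame (both are assumptions, not figures from the dataset):

num_videos = 300000
frames_per_video = 275                   # ~10 s at 25-30 fps (assumed)
raw_bytes_per_frame = 256 * 256 * 3      # uint8, uncompressed (assumed resolution)
jpg_bytes_per_frame = 15 * 1024          # rough per-frame JPG size (assumed)

raw_tb = num_videos * frames_per_video * raw_bytes_per_frame / 1e12
jpg_tb = num_videos * frames_per_video * jpg_bytes_per_frame / 1e12
print("uncompressed: ~%.1f TB, JPG: ~%.1f TB" % (raw_tb, jpg_tb))
# -> uncompressed: ~16.2 TB, JPG: ~1.3 TB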

I wrote the following code to preprocess the video dataset and write the video frames as TFRecord files (each about 5GB in size):

import cv2
import tensorflow as tf


def _int64_feature(value):
    """Wrapper for inserting int64 features into Example proto."""
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

def _bytes_feature(value):
    """Wrapper for inserting bytes features into Example proto."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))


with tf.python_io.TFRecordWriter(output_file) as writer:

  # Read and resize all video frames, np.uint8 of size [N,H,W,3]
  frames = ... 

  features = {}
  features['num_frames']  = _int64_feature(frames.shape[0])
  features['height']      = _int64_feature(frames.shape[1])
  features['width']       = _int64_feature(frames.shape[2])
  features['channels']    = _int64_feature(frames.shape[3])
  features['class_label'] = _int64_feature(example['class_id'])
  features['class_text']  = _bytes_feature(tf.compat.as_bytes(example['class_label']))
  features['filename']    = _bytes_feature(tf.compat.as_bytes(example['video_id']))

  # Compress each frame as JPG and store the bytes under keys:
  # 'frames/0000', 'frames/0001', ...
  for i in range(len(frames)):
      ret, buffer = cv2.imencode(".jpg", frames[i])
      features["frames/{:04d}".format(i)] = _bytes_feature(tf.compat.as_bytes(buffer.tobytes()))

  tfrecord_example = tf.train.Example(features=tf.train.Features(feature=features))
  writer.write(tfrecord_example.SerializeToString())

This works fine; the dataset is nicely written as TFRecord files with the frames stored as compressed JPG bytes. My question is how to read the TFRecord files during training, randomly sample 64 consecutive frames from a video, and decode the JPG images.

According to TensorFlow's documentation on tf.data, we need to do something like this:

filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(...)  # Parse the record into tensors.
dataset = dataset.repeat()  # Repeat the input indefinitely.
dataset = dataset.batch(32)
iterator = dataset.make_initializable_iterator()
training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
sess.run(iterator.initializer, feed_dict={filenames: training_filenames})

There are plenty of examples of how to do this with images, and that part is quite straightforward. However, for videos and random sampling of frames I am stuck. The tf.train.Features object stores the frames as frame/00001, frame/000002 and so on. My first question is: how do I randomly sample a set of consecutive frames from inside the dataset.map() function? A further consideration is that each frame has a variable number of bytes because of the JPG compression and needs to be decoded with tf.image.decode_jpeg.

Any help on how to best set up reading video samples from TFRecord files would be greatly appreciated!

mrr*_*rry 6

Encoding each frame as a separate feature makes it difficult to select frames dynamically, because the signature of tf.parse_example() (and tf.parse_single_example()) requires that the set of parsed feature names be fixed at graph construction time. However, you could try encoding the frames as a single feature that contains a list of JPEG-encoded strings:

def _bytes_list_feature(values):
    """Wrapper for inserting bytes features into Example proto."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))

with tf.python_io.TFRecordWriter(output_file) as writer:

  # Read and resize all video frames, np.uint8 of size [N,H,W,3]
  frames = ... 

  features = {}
  features['num_frames']  = _int64_feature(frames.shape[0])
  features['height']      = _int64_feature(frames.shape[1])
  features['width']       = _int64_feature(frames.shape[2])
  features['channels']    = _int64_feature(frames.shape[3])
  features['class_label'] = _int64_feature(example['class_id'])
  features['class_text']  = _bytes_feature(tf.compat.as_bytes(example['class_label']))
  features['filename']    = _bytes_feature(tf.compat.as_bytes(example['video_id']))

  # Compress the frames as JPG and store them as a list of byte strings in 'frames'
  encoded_frames = [tf.compat.as_bytes(cv2.imencode(".jpg", frame)[1].tobytes())
                    for frame in frames]
  features['frames'] = _bytes_list_feature(encoded_frames)

  tfrecord_example = tf.train.Example(features=tf.train.Features(feature=features))
  writer.write(tfrecord_example.SerializeToString())

Having done this, it will be possible to slice the frames feature dynamically, using a modified version of your parsing code:

def decode(serialized_example, sess):
  # Prepare feature list; read encoded JPG images as bytes
  features = dict()
  features["class_label"] = tf.FixedLenFeature((), tf.int64)
  features["frames"] = tf.VarLenFeature(tf.string)
  features["num_frames"] = tf.FixedLenFeature((), tf.int64)

  # Parse into tensors
  parsed_features = tf.parse_single_example(serialized_example, features)

  # Randomly sample offset from the valid range.
  random_offset = tf.random_uniform(
      shape=(), minval=0,
      maxval=parsed_features["num_frames"] - SEQ_NUM_FRAMES, dtype=tf.int64)

  offsets = tf.range(random_offset, random_offset + SEQ_NUM_FRAMES)

  # Decode the encoded JPG images
  images = tf.map_fn(lambda i: tf.image.decode_jpeg(parsed_features["frames"].values[i]),
                     offsets, dtype=tf.uint8)  # dtype needed: output dtype differs from `offsets`

  label  = tf.cast(parsed_features["class_label"], tf.int64)

  return images, label

(Note that I wasn't able to run your code, so there may be some small mistakes, but hopefully this is enough to get you started.)
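
For completeness, a minimal sketch (TF 1.x API, matching the code above) of how decode() could be wired into the tf.data pipeline from the question; the batch size here is an arbitrary placeholder:

import tensorflow as tf

SEQ_NUM_FRAMES = 64  # number of consecutive frames sampled per video

filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
# decode() takes an (unused) `sess` argument, so wrap it in a lambda.
dataset = dataset.map(lambda serialized: decode(serialized, None))
dataset = dataset.repeat()
dataset = dataset.batch(8)  # batch size chosen arbitrarily for illustration
iterator = dataset.make_initializable_iterator()
images, labels = iterator.get_next()

training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
with tf.Session() as sess:
    sess.run(iterator.initializer, feed_dict={filenames: training_filenames})
    frame_batch, label_batch = sess.run([images, labels])  # [8, 64, H, W, 3], [8]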


Fáb*_*bio 5

Since you are using very similar dependencies, I suggest taking a look at the following Python package, as it addresses your exact problem setting:

pip install video2tfrecord

Or see https://github.com/ferreirafabio/video2tfrecord. It should also be adaptable enough to use with tf.data.Dataset.

Disclaimer: I am one of the authors of the package.