TensorFlow Dataset is very slow compared to queues

Pek*_*kka · 5 · python, tensorflow

Performing the same task with the Dataset API seems to be 10-100x slower than with queues.

This is what I am trying to do with Dataset:

dataset = tf.data.TFRecordDataset(filenames).repeat()
dataset = dataset.batch(100)
dataset = dataset.map(_parse_function)
dataset = dataset.prefetch(1000)
d = dataset.make_one_shot_iterator()

%timeit -n 200 sess.run(d.get_next())

And this is the same thing with queues:

filename_queue = tf.train.string_input_producer(filenames, capacity=1)

reader = tf.TFRecordReader()
_, serialized_example = reader.read_up_to(filename_queue, 100)

features = _parse_function(serialized_example)

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

%timeit -n 200 sess.run(features)

Observations:

Dataset: 23.6 ms ± 8.73 ms per loop (mean ± std. dev. of 7 runs, 200 loops each)

Queue: 481 µs ± 91.7 µs per loop (mean ± std. dev. of 7 runs, 200 loops each)

Why does this happen? How can I make the Dataset version faster?


Using TensorFlow 1.4 and Python 3.5.

Full code to reproduce:

import tensorflow as tf
import numpy as np
import glob
import os


def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))


def create_data(i):
    # Write 1000 small examples (scalar x, variable-length y) to a TFRecord file.
    tfrecords_filename = '_temp/dstest/tt%d.tfr' % i

    writer = tf.python_io.TFRecordWriter(tfrecords_filename)

    for j in range(1000):
        f = tf.train.Features(feature={
            'x': _int64_feature([j]),
            "y": _int64_feature(np.random.randint(5, 100, size=np.random.randint(6)))
        })

        example = tf.train.Example(features=f)
        writer.write(example.SerializeToString())

    writer.close()
    return tfrecords_filename


def _parse_function(example_proto):
    # Parse a batch of serialized Examples; y is a variable-length int64 feature.
    features = {
        "x": tf.FixedLenFeature((), tf.int64),
        "y": tf.FixedLenSequenceFeature((), tf.int64, allow_missing=True)
    }
    parsed_features = tf.parse_example(example_proto, features)
    return parsed_features


os.makedirs("_temp/dstest", exist_ok=True)
sess = tf.InteractiveSession()

filenames = [create_data(i) for i in range(5)]

#### DATASET
dataset = tf.data.TFRecordDataset(filenames).repeat()
dataset = dataset.batch(100)
dataset = dataset.map(_parse_function)
dataset = dataset.prefetch(1000)
d = dataset.make_one_shot_iterator()

%timeit -n 200 sess.run(d.get_next())

#### QUEUE
filename_queue = tf.train.string_input_producer(filenames, capacity=1)

reader = tf.TFRecordReader()
_, serialized_example = reader.read_up_to(filename_queue, 100)

features = _parse_function(serialized_example)

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

%timeit -n 200 sess.run(features)

coord.request_stop()
coord.join(threads)

Pek*_*kka · 5

Oh, I figured it out: I shouldn't be calling d.get_next() repeatedly. In graph mode, every call to get_next() adds new ops to the graph, so the benchmark was timing ever-growing graph construction on top of the actual data fetch.

When I change it to:

d = dataset.make_one_shot_iterator().get_next()
%timeit -n 200 sess.run(d)

then the speed is about the same as the queue version, even without prefetching.

And the result of each sess.run call is still different every time, as it should be.
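
A minimal sketch (assuming TensorFlow 1.x graph mode, as in the question) that makes the cause visible: every call to get_next() adds new ops to the default graph, so timing sess.run(d.get_next()) in a loop also measures ever-growing graph construction:

import tensorflow as tf

dataset = tf.data.Dataset.range(10).repeat()
iterator = dataset.make_one_shot_iterator()

graph = tf.get_default_graph()
print(len(graph.get_operations()))  # op count before the loop

for _ in range(100):
    iterator.get_next()  # each call adds new ops to the graph

print(len(graph.get_operations()))  # op count has grown with every call

Building the get_next tensor once and fetching that same tensor in every sess.run call avoids this overhead, which is why the corrected benchmark matches the queue version.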