Lio*_*Lai 13 python dataset tensorflow
I want to know the difference between make_initializable_iterator and make_one_shot_iterator.
1. The TensorFlow documentation says "A 'one-shot' iterator does not currently support re-initialization." What exactly does this mean?
2. Are the following two snippets equivalent?
Using make_initializable_iterator:
iterator = data_ds.make_initializable_iterator()
data_iter = iterator.get_next()

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for e in range(1, epoch + 1):
    sess.run(iterator.initializer)
    while True:
        try:
            x_train, y_train = sess.run(data_iter)
            _, cost = sess.run([train_op, loss_op],
                               feed_dict={X: x_train, Y: y_train})
        except tf.errors.OutOfRangeError:
            break

sess.close()
Using make_one_shot_iterator:
iterator = data_ds.make_one_shot_iterator()
data_iter = iterator.get_next()

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for e in range(1, epoch + 1):
    while True:
        try:
            x_train, y_train = sess.run(data_iter)
            _, cost = sess.run([train_op, loss_op],
                               feed_dict={X: x_train, Y: y_train})
        except tf.errors.OutOfRangeError:
            break

sess.close()
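The two snippets are not equivalent, and you can see why with a small self-contained experiment. The sketch below uses a toy `Dataset.range(3)` (illustrative, not from the question) and the `tf.compat.v1` API so it also runs under TF 2.x; the point is that the initializable iterator produces a full pass per epoch, while the one-shot iterator is exhausted after the first epoch and every later epoch's inner loop breaks immediately:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

data_ds = tf.compat.v1.data.Dataset.range(3)

# One-shot: yields the data exactly once over the session's lifetime.
one_shot = tf.compat.v1.data.make_one_shot_iterator(data_ds)
next_one_shot = one_shot.get_next()

# Initializable: can be reset at the start of every epoch.
init_iter = tf.compat.v1.data.make_initializable_iterator(data_ds)
next_init = init_iter.get_next()

seen_init, seen_one_shot = [], []
with tf.compat.v1.Session() as sess:
    for epoch in range(2):
        sess.run(init_iter.initializer)  # reset -> full pass each epoch
        while True:
            try:
                seen_init.append(sess.run(next_init))
            except tf.errors.OutOfRangeError:
                break
        while True:  # there is no initializer to run for the one-shot iterator
            try:
                seen_one_shot.append(sess.run(next_one_shot))
            except tf.errors.OutOfRangeError:
                break

print(seen_init)      # [0, 1, 2, 0, 1, 2] -- one full pass per epoch
print(seen_one_shot)  # [0, 1, 2] -- exhausted after the first epoch
```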
Sco*_*ith 12
Suppose you want to use the same code for training and validation. You might want to use the same iterator, but initialized to point at different datasets; something like the following:
def _make_batch_iterator(filenames):
    dataset = tf.data.TFRecordDataset(filenames)
    ...
    return dataset.make_initializable_iterator()

filenames = tf.placeholder(tf.string, shape=[None])
iterator = _make_batch_iterator(filenames)

with tf.Session() as sess:
    for epoch in range(num_epochs):
        # Initialize iterator with training data
        sess.run(iterator.initializer,
                 feed_dict={filenames: ['training.tfrecord']})
        _train_model(...)

        # Re-initialize iterator with validation data
        sess.run(iterator.initializer,
                 feed_dict={filenames: ['validation.tfrecord']})
        _validate_model(...)
With a one-shot iterator, you cannot re-initialize it like this.
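If you do want a one-shot iterator to cover a fixed number of epochs, the usual workaround is to bake the epoch count into the dataset with repeat() and consume it in a single loop. A minimal sketch (the toy dataset and epoch count are illustrative, written against the `tf.compat.v1` API):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

num_epochs = 2
# repeat(num_epochs) makes one pass through the iterator cover all epochs.
data_ds = tf.compat.v1.data.Dataset.range(3).repeat(num_epochs)

iterator = tf.compat.v1.data.make_one_shot_iterator(data_ds)
next_elem = iterator.get_next()

seen = []
with tf.compat.v1.Session() as sess:
    while True:
        try:
            seen.append(sess.run(next_elem))
        except tf.errors.OutOfRangeError:
            break

print(seen)  # [0, 1, 2, 0, 1, 2]
```

This avoids re-initialization entirely, but unlike the initializable iterator above it cannot switch the underlying data (e.g. training vs. validation files) mid-session.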