TensorFlow: Non-repeatable results

Doj*_*ira · 13 · python, random, tensorflow

The Problem

I have a Python script that uses TensorFlow to create a multilayer perceptron network (with dropout) in order to do binary classification. Even though I have been careful to set both the Python and TensorFlow seeds, I get non-repeatable results. If I run once and then run again, I get different results. I can even run once, quit Python, restart Python, run again, and get different results.

What I've Tried

I know that some people have posted questions about getting non-repeatable results in TensorFlow (e.g., "How to get stable results...", "set_random_seed not working...", "How to get reproducible results in TensorFlow"), and the answers usually turn out to involve an incorrect use or understanding of tf.set_random_seed(). I have made sure to implement the solutions given, but that has not solved my problem.

A common mistake is not realizing that tf.set_random_seed() is only a graph-level seed, and that running the script multiple times will alter the graph, which would explain the non-repeatable results. I used the following statement to print out the entire graph, and verified (via diff) that the graph is identical across runs even when the results differ.

print [n.name for n in tf.get_default_graph().as_graph_def().node]

I have also used calls to tf.reset_default_graph() and tf.get_default_graph().finalize() to avoid any changes to the graph, even though this is probably overkill.
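As a minimal sketch of that pattern (with the graph construction itself elided):

import tensorflow as tf

tf.reset_default_graph()            # discard any graph left over from a previous run
# ... build the graph here ...
tf.get_default_graph().finalize()   # any later attempt to add an op now raises an error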

The (Relevant) Code

My script is about 360 lines long, so here are just the relevant lines (with snipped code indicated). Any items in ALL_CAPS are constants that are defined in the Parameters block below.

import random  # used below by random.seed() and random.sample()

import numpy as np
import tensorflow as tf

from copy import deepcopy
from tqdm import tqdm  # Progress bar

# --------------------------------- Parameters ---------------------------------
(snip)

# --------------------------------- Functions ---------------------------------
(snip)

# ------------------------------ Obtain Train Data -----------------------------
(snip)

# ------------------------------ Obtain Test Data -----------------------------
(snip)

random.seed(12345)
tf.set_random_seed(12345)

(snip)

# ------------------------- Build the TensorFlow Graph -------------------------

tf.reset_default_graph()

with tf.Graph().as_default():

    x = tf.placeholder("float", shape=[None, N_INPUT])
    y_ = tf.placeholder("float", shape=[None, N_CLASSES])

    # Store layers weight & bias
    weights = {
        'h1': tf.Variable(tf.random_normal([N_INPUT, N_HIDDEN_1])),
        'h2': tf.Variable(tf.random_normal([N_HIDDEN_1, N_HIDDEN_2])),
        'h3': tf.Variable(tf.random_normal([N_HIDDEN_2, N_HIDDEN_3])),
        'out': tf.Variable(tf.random_normal([N_HIDDEN_3, N_CLASSES]))
    }

    biases = {
        'b1': tf.Variable(tf.random_normal([N_HIDDEN_1])),
        'b2': tf.Variable(tf.random_normal([N_HIDDEN_2])),
        'b3': tf.Variable(tf.random_normal([N_HIDDEN_3])),
        'out': tf.Variable(tf.random_normal([N_CLASSES]))
    }

    # Construct model
    pred = multilayer_perceptron(x, weights, biases, USE_DROP_LAYERS, DROP_KEEP_PROB)

    mean1 = tf.reduce_mean(weights['h1'])
    mean2 = tf.reduce_mean(weights['h2'])
    mean3 = tf.reduce_mean(weights['h3'])

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y_))

    regularizers = (tf.nn.l2_loss(weights['h1']) + tf.nn.l2_loss(biases['b1']) +
                    tf.nn.l2_loss(weights['h2']) + tf.nn.l2_loss(biases['b2']) +
                    tf.nn.l2_loss(weights['h3']) + tf.nn.l2_loss(biases['b3']))

    cost += COEFF_REGULAR * regularizers

    optimizer = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(cost)

    out_labels = tf.nn.softmax(pred)

    sess = tf.InteractiveSession()
    sess.run(tf.initialize_all_variables())

    tf.get_default_graph().finalize()  # Lock the graph as read-only

    # Print the default graph in text form
    print [n.name for n in tf.get_default_graph().as_graph_def().node]

    # --------------------------------- Training ----------------------------------

    print "Start Training"
    pbar = tqdm(total=TRAINING_EPOCHS)
    for epoch in range(TRAINING_EPOCHS):
        avg_cost = 0.0
        batch_iter = 0

        train_outfile.write(str(epoch))

        while batch_iter < BATCH_SIZE:
            train_features = []
            train_labels = []
            batch_segments = random.sample(train_segments, 20)
            for segment in batch_segments:
                train_features.append(segment[0])
                train_labels.append(segment[1])
            sess.run(optimizer, feed_dict={x: train_features, y_: train_labels})
            line_out = "," + str(batch_iter) + "\n"
            train_outfile.write(line_out)
            line_out = ",," + str(sess.run(mean1, feed_dict={x: train_features, y_: train_labels}))
            line_out += "," + str(sess.run(mean2, feed_dict={x: train_features, y_: train_labels}))
            line_out += "," + str(sess.run(mean3, feed_dict={x: train_features, y_: train_labels})) + "\n"
            train_outfile.write(line_out)
            avg_cost += sess.run(cost, feed_dict={x: train_features, y_: train_labels})/BATCH_SIZE
            batch_iter += 1

        line_out = ",,,,," + str(avg_cost) + "\n"
        train_outfile.write(line_out)
        pbar.update(1)  # Increment the progress bar by one

    train_outfile.close()
    print "Completed training"


# ------------------------------ Testing & Output ------------------------------

keep_prob = 1.0  # Do not use dropout when testing

print "now reducing mean"
print(sess.run(mean1, feed_dict={x: test_features, y_: test_labels}))

print "TRUE LABELS"
print(test_labels)
print "PREDICTED LABELS"
pred_labels = sess.run(out_labels, feed_dict={x: test_features})
print(pred_labels)

output_accuracy_results(pred_labels, test_labels)

sess.close()

What is not repeatable

As you can see, I output the results during each epoch to a file, and I also print out accuracy numbers at the end. None of these match from run to run, even though I believe I have set the seed(s) correctly. I have used both random.seed(12345) and tf.set_random_seed(12345).

Please let me know if I need to provide more information, and thanks in advance for any help.

-DG

Setup details

TensorFlow version 0.8.0 (CPU only)
Enthought Canopy version 1.7.2 (Python 2.7, not 3.+)
Mac OS X version 10.11.3

Yar*_*tov 12

In addition to the graph-level seed, you also need to set an operation-level seed, i.e.:

tf.reset_default_graph()
a = tf.constant([1, 1, 1, 1, 1], dtype=tf.float32)
graph_level_seed = 1
operation_level_seed = 1
tf.set_random_seed(graph_level_seed)
b = tf.nn.dropout(a, 0.5, seed=operation_level_seed)

  • Wow. Do you need to set an operation-level seed for _every_ operation? All the `tf.placeholder`, `tf.Variable`, `tf.reduce_mean`, and so on? (2 upvotes)
  • No, just the ones that have randomness in them (2 upvotes)
  • @Yaroslav I don't understand: I thought the whole point of `tf.set_random_seed()` was to affect all the random operations in the graph, so that you don't have to set the seed manually for every random operator. What is it for, then? In the example from the [doc](https://www.tensorflow.org/versions/r0.11/api_docs/python/constant_op.html#set_random_seed), they only set the global seed to get reproducible results. (2 upvotes)
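Applied to the code in the question, this would mean passing a seed to every op that draws random numbers. A minimal sketch, assuming a hypothetical integer constant OP_SEED defined alongside the other ALL_CAPS parameters:

OP_SEED = 42  # hypothetical operation-level seed, defined with the other parameters

weights = {
    'h1': tf.Variable(tf.random_normal([N_INPUT, N_HIDDEN_1], seed=OP_SEED)),
    'h2': tf.Variable(tf.random_normal([N_HIDDEN_1, N_HIDDEN_2], seed=OP_SEED)),
    'h3': tf.Variable(tf.random_normal([N_HIDDEN_2, N_HIDDEN_3], seed=OP_SEED)),
    'out': tf.Variable(tf.random_normal([N_HIDDEN_3, N_CLASSES], seed=OP_SEED))
}
# ... and likewise for the biases; inside multilayer_perceptron, each dropout
# layer would become:
#     layer = tf.nn.dropout(layer, DROP_KEEP_PROB, seed=OP_SEED)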

tot*_*to2 9

See this TensorFlow GitHub issue. Some operations on the GPU are not fully deterministic (speed vs. precision).

I have also noticed that, for the seed to have any effect, tf.set_random_seed(...) must be called before the Session is created. In addition, you should either completely restart the Python interpreter every time you run your code, or call tf.reset_default_graph() at the start.
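A minimal sketch of that ordering (illustrative only, using the same TF 0.x API as the code above):

import tensorflow as tf

tf.reset_default_graph()     # or restart the Python interpreter entirely
tf.set_random_seed(12345)    # graph-level seed, set *before* the Session exists
a = tf.random_normal([3])    # an op with randomness
sess = tf.Session()          # the Session is created only after the seed is set
print sess.run(a)            # should now be repeatable from run to run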