Post by Eva*_*van

Getting better validation results when setting tf.layers.batch_normalization to "training=False" during training

I train a DNN with TensorFlow. I learned that batch normalization can be helpful for DNNs, so I used it in mine.

I use tf.layers.batch_normalization and build the network as the API documentation instructs: set its parameter to "training=True" when training and to "training=False" when validating, and add the tf.get_collection(tf.GraphKeys.UPDATE_OPS) dependency.

Here is my code:

# -*- coding: utf-8 -*-
import tensorflow as tf
import numpy as np

input_node_num=257*7
output_node_num=257

tf_X = tf.placeholder(tf.float32,[None,input_node_num])
tf_Y = tf.placeholder(tf.float32,[None,output_node_num])
# NOTE: tf.nn.dropout's second argument is keep_prob, so despite its
# name this placeholder actually holds a keep probability
dropout_rate=tf.placeholder(tf.float32)
flag_training=tf.placeholder(tf.bool) # fed True while training, False while validating
hid_node_num=2048

# three hidden layers, each: fully connected -> batch norm -> ReLU -> dropout
h1=tf.contrib.layers.fully_connected(tf_X, hid_node_num, activation_fn=None)
h1_2=tf.nn.relu(tf.layers.batch_normalization(h1,training=flag_training))
h1_3=tf.nn.dropout(h1_2,dropout_rate)

h2=tf.contrib.layers.fully_connected(h1_3, hid_node_num, activation_fn=None)
h2_2=tf.nn.relu(tf.layers.batch_normalization(h2,training=flag_training))
h2_3=tf.nn.dropout(h2_2,dropout_rate)

h3=tf.contrib.layers.fully_connected(h2_3, hid_node_num, activation_fn=None)
h3_2=tf.nn.relu(tf.layers.batch_normalization(h3,training=flag_training))
h3_3=tf.nn.dropout(h3_2,dropout_rate)

# linear output layer and mean-squared-error loss
tf_Y_pre=tf.contrib.layers.fully_connected(h3_3, output_node_num, activation_fn=None)

loss=tf.reduce_mean(tf.square(tf_Y-tf_Y_pre))

# the moving-mean/variance updates of batch_normalization are collected
# in UPDATE_OPS; making the train step depend on them keeps the
# inference statistics (used when training=False) up to date
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for i1 in range(3000*num_batch):  # num_batch is not defined in the snippet shown
        train_feature=... # …
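For reference, a minimal sketch of how these placeholders are meant to be fed (train_feature/train_label, val_feature/val_label, num_batch, and val_loss below are hypothetical stand-ins, since the data pipeline is elided above): training=True with a keep probability below 1.0 during training, and training=False with keep probability 1.0 during validation, so batch norm switches to its moving averages and dropout is disabled.

# hypothetical feed pattern; the array names stand in for the
# elided data pipeline and are not from the original post
for i1 in range(3000*num_batch):
    # training step: batch statistics are used and, through
    # update_ops, the moving averages are refreshed
    sess.run(train_step, feed_dict={tf_X: train_feature,
                                    tf_Y: train_label,
                                    dropout_rate: 0.8,
                                    flag_training: True})

# validation: moving averages are used; keep probability 1.0 disables dropout
val_loss = sess.run(loss, feed_dict={tf_X: val_feature,
                                     tf_Y: val_label,
                                     dropout_rate: 1.0,
                                     flag_training: False})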

python deep-learning tensorflow batch-normalization

3 votes · 2 answers · 5609 views