I have tried several versions of batch_normalization in TensorFlow, but none of them work! The results are all wrong when I set batch_size = 1 at inference time.
Version 1: directly use the official version from tensorflow.contrib
from tensorflow.contrib.layers.python.layers.layers import batch_norm
Used like this:
output = lrelu(batch_norm(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
is_training = True when training and False at inference time.
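For background on why batch_size = 1 is exactly the failure case: batch normalization normalizes with the statistics of the current batch during training, but with accumulated moving statistics at inference. The following is my own minimal NumPy sketch of what the is_training flag toggles (not the contrib implementation, and it omits the learned gamma/beta):

```python
import numpy as np

def batch_norm_np(x, moving_mean, moving_var, is_training,
                  decay=0.999, epsilon=1e-3):
    """Toy batch norm (gamma=1, beta=0) showing the two modes."""
    if is_training:
        # Training: normalize with the current batch's statistics
        # and update the moving averages as a side effect.
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        moving_mean = decay * moving_mean + (1 - decay) * mean
        moving_var = decay * moving_var + (1 - decay) * var
    else:
        # Inference: use the accumulated moving statistics, so the
        # output does not depend on the batch size (even batch_size=1).
        mean, var = moving_mean, moving_var
    y = (x - mean) / np.sqrt(var + epsilon)
    return y, moving_mean, moving_var
```

Note that if the training branch accidentally runs at inference with batch_size = 1, then mean = x and var = 0, so every output is zeroed out. That matches the "results are all wrong at batch_size = 1" symptom and usually means is_training was effectively still True, or the moving statistics were never updated.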
Version 2: use tf.cond to switch between a training branch and an inference branch that share variables
def batch_norm_layer(x, train_phase, scope_bn='bn'):
    bn_train = batch_norm(x, decay=0.999, epsilon=1e-3, center=True, scale=True,
                          updates_collections=None,
                          is_training=True,
                          reuse=None,  # is this right?
                          trainable=True,
                          scope=scope_bn)
    bn_inference = batch_norm(x, decay=0.999, epsilon=1e-3, center=True, scale=True,
                              updates_collections=None,
                              is_training=False,
                              reuse=True,  # is this right?
                              trainable=True,
                              scope=scope_bn)
    z = tf.cond(train_phase, lambda: bn_train, lambda: bn_inference)
    return z
Used like this:
output = lrelu(batch_norm_layer(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
is_training is a placeholder; it is fed True at training time and False at inference time.
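Independently of which version is used, one frequent cause of wrong inference results with these layers is that decay=0.999 makes the moving averages converge very slowly: after a short training run they are still close to their initialization, so inference normalizes with nearly meaningless statistics. A quick sketch of the exponential moving average (assuming, for illustration, a true activation mean of 5.0):

```python
import numpy as np

def moving_average(true_value, decay, steps, init=0.0):
    """Exponential moving average as used for batch norm's moving_mean."""
    m = init
    for _ in range(steps):
        m = decay * m + (1 - decay) * true_value
    return m

# With decay=0.999, 1000 update steps only close ~63% of the gap
# to the true statistic (1 - 0.999**1000 ≈ 0.632).
print(moving_average(5.0, decay=0.999, steps=1000))  # ≈ 3.16
# A smaller decay converges much faster over the same number of steps.
print(moving_average(5.0, decay=0.9, steps=1000))    # ≈ 5.0
```

So if the network is only trained for a few hundred or thousand steps, lowering decay (e.g. to 0.9) is worth trying before debugging anything else.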
Version 3: from slim, https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py
def batch_norm_layer(inputs,
                     is_training=True,
                     scope='bn'): …