TensorFlow: Batch Norm breaks the network when is_training = False

Cor*_*zin 6 tensorflow

I am trying to use TensorFlow-Slim's batch norm layer as follows:

net = ...
net = slim.batch_norm(net, scale=True, is_training=self.isTraining,
                      updates_collections=None, decay=0.9)
net = tf.nn.relu(net)
net = ...

I train with:

self.optimizer = slim.learning.create_train_op(self.model.loss,
    tf.train.MomentumOptimizer(learning_rate=self.learningRate,
                               momentum=0.9, use_nesterov=True))

optimizer = self.sess.run([self.optimizer],
    feed_dict={self.model.isTraining:True})
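
For the isTraining:True feed to work, self.isTraining is presumably a boolean placeholder created when the model graph is built. A sketch of what that elided definition likely looks like (the attribute name comes from the question; the rest is an assumption):

# Assumed definition of the elided training flag: a scalar bool placeholder
# that slim.batch_norm can branch on and that sess.run can feed each step.
self.isTraining = tf.placeholder(tf.bool, shape=[], name='is_training')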

I load the saved weights with:

net = model.Model(sess, width, height, channels, weightDecay)

savedWeightsDir = './savedWeights/'
saver = tf.train.Saver(max_to_keep=5)
checkpointStr = tf.train.latest_checkpoint(savedWeightsDir)
sess.run(tf.global_variables_initializer())
saver.restore(sess, checkpointStr)
global_step = tf.contrib.framework.get_or_create_global_step()

I run inference with:

inf = self.sess.run([self.softmax],
    feed_dict={self.imageBatch: imageBatch, self.isTraining: False})

Of course, I am leaving a lot out and paraphrasing some of the code, but I believe this is everything that touches batch norm. The strange thing is that I get better results if I set isTraining:True at inference time. Could it be something about loading the weights — maybe the batch norm statistics are not being saved? Is there anything obviously wrong in the code? Thanks.
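
One quick way to check whether the batch norm statistics made it into the checkpoint is to list the variables it contains. A minimal sketch, assuming the checkpoint directory from the question (tf.train.list_variables is part of the TF 1.x API):

import tensorflow as tf

# List every variable stored in the latest checkpoint and look for the
# batch norm moving statistics. If no moving_mean/moving_variance entries
# show up, the statistics were never saved.
checkpoint_path = tf.train.latest_checkpoint('./savedWeights/')
for name, shape in tf.train.list_variables(checkpoint_path):
    if 'moving_mean' in name or 'moving_variance' in name:
        print(name, shape)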

Iva*_*aev 0

I just ran into the same problem and found the solution here. The problem stems from the fact that tf.layers.batch_normalization is a layer that needs to update its moving_mean and moving_variance.

To do this correctly in your case, you need to modify the training procedure to:

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    self.optimizer = slim.learning.create_train_op(self.model.loss,
        tf.train.MomentumOptimizer(learning_rate=self.learningRate,
                                   momentum=0.9, use_nesterov=True))

Or, more generally, from the documentation:

  x_norm = tf.layers.batch_normalization(x, training=training)

  # ...

  update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
  with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)
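
For completeness, here is a minimal self-contained sketch of the whole pattern, with a hypothetical toy model (x, labels, the dense layers, and the shapes are made up for illustration): it wires the UPDATE_OPS dependency into the train op, so feeding training=False at inference time uses correctly updated moving statistics.

import tensorflow as tf

# Hypothetical toy graph, just to show the batch norm update pattern.
x = tf.placeholder(tf.float32, [None, 16])
labels = tf.placeholder(tf.float32, [None, 1])
training = tf.placeholder(tf.bool, name='is_training')

net = tf.layers.dense(x, 32)
net = tf.layers.batch_normalization(net, training=training)
net = tf.nn.relu(net)
logits = tf.layers.dense(net, 1)
loss = tf.losses.mean_squared_error(labels, logits)

# The moving_mean/moving_variance update ops live in UPDATE_OPS; making the
# train op depend on them ensures they run on every training step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.MomentumOptimizer(0.01, 0.9).minimize(loss)

# Training steps feed training=True; inference feeds training=False so the
# layer uses the (now correctly updated) moving statistics.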