Tags: python, keras, tensorflow, batch-normalization
A similar, unresolved question has been asked here. I am testing a deep reinforcement learning algorithm that uses the Keras backend in TensorFlow. I am not very familiar with tf.keras, but I would like to add batch normalization layers. So I tried tf.keras.layers.BatchNormalization(), but it does not update the moving mean and variance, because update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) is empty.

Using plain tf.layers.batch_normalization seems to work fine. However, since the full algorithm is somewhat complex, I need to find a way to use tf.keras.

The standard tf layer, batch_normed = tf.layers.batch_normalization(hidden, training=True), does update the moving averages, since update_ops is not empty:
[
<tf.Operation 'batch_normalization/AssignMovingAvg' type=AssignSub>,
<tf.Operation 'batch_normalization/AssignMovingAvg_1' type=AssignSub>,
<tf.Operation 'batch_normalization_1/AssignMovingAvg' type=AssignSub>,
<tf.Operation 'batch_normalization_1/AssignMovingAvg_1' type=AssignSub>
]
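What those AssignSub ops compute can be sketched in plain numpy. This is a hypothetical illustration, not TensorFlow code; it assumes the default momentum of 0.99 that both batch-norm layer variants use, and the standard initialization (moving_mean to zeros, moving_variance to ones):

```python
import numpy as np

momentum = 0.99                 # assumed default momentum of the BN layers

moving_mean = np.zeros(4)       # moving_mean is initialized to zeros
moving_variance = np.ones(4)    # moving_variance is initialized to ones

batch = np.random.RandomState(0).rand(100, 4)
batch_mean = batch.mean(axis=0)
batch_variance = batch.var(axis=0)

# Each AssignSub op subtracts (1 - momentum) * (moving_stat - batch_stat),
# i.e. an exponential moving average toward the current batch statistics:
moving_mean -= (1.0 - momentum) * (moving_mean - batch_mean)
moving_variance -= (1.0 - momentum) * (moving_variance - batch_variance)

print(moving_mean)      # drifts slowly toward batch_mean
print(moving_variance)  # drifts slowly toward batch_variance
```

If these ops never run, the moving statistics simply stay at their initial values, which is exactly the symptom shown below.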
Minimal example that does not work:
import tensorflow as tf
import numpy as np

tf.reset_default_graph()
graph = tf.get_default_graph()
tf.keras.backend.set_learning_phase(True)

input_shapes = [(3, )]
hidden_layer_sizes = [16, 16]

inputs = [
    tf.keras.layers.Input(shape=input_shape)
    for input_shape in input_shapes
]
concatenated = tf.keras.layers.Lambda(
    lambda x: tf.concat(x, axis=-1)
)(inputs)

out = concatenated
for units in hidden_layer_sizes:
    hidden = tf.keras.layers.Dense(
        units, activation=None
    )(out)
    batch_normed = tf.keras.layers.BatchNormalization()(hidden, training=True)
    # batch_normed = tf.layers.batch_normalization(hidden, training=True)
    out = tf.keras.layers.Activation('relu')(batch_normed)

out = tf.keras.layers.Dense(
    units=1, activation='linear'
)(out)

data = np.random.rand(100, 3)
with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(10):
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        sess.run(update_ops, {inputs[0]: data})
        sess.run(out, {inputs[0]: data})

        variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                      scope='batch_normalization')
        bn_gamma, bn_beta, bn_moving_mean, bn_moving_variance = [], [], [], []
        for variable in variables:
            val = sess.run(variable)
            nv = np.linalg.norm(val)
            if 'gamma' in variable.name:
                bn_gamma.append(nv)
            if 'beta' in variable.name:
                bn_beta.append(nv)
            if 'moving_mean' in variable.name:
                bn_moving_mean.append(nv)
            if 'moving_variance' in variable.name:
                bn_moving_variance.append(nv)

        diagnostics = {
            'bn_Q_gamma': np.mean(bn_gamma),
            'bn_Q_beta': np.mean(bn_beta),
            'bn_Q_moving_mean': np.mean(bn_moving_mean),
            'bn_Q_moving_variance': np.mean(bn_moving_variance),
        }
        print(diagnostics)
The output is the following (you can see that moving_mean and moving_variance stay constant):
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0, 'bn_Q_moving_variance': 4.0}
while the expected output looks like this (comment out the line that computes batch_normed with tf.keras and uncomment the line below it):
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0148749575, 'bn_Q_moving_variance': 3.966927}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.029601166, 'bn_Q_moving_variance': 3.934192}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.04418011, 'bn_Q_moving_variance': 3.9017918}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.05861327, 'bn_Q_moving_variance': 3.8697228}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.0729021, 'bn_Q_moving_variance': 3.8379822}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.08704803, 'bn_Q_moving_variance': 3.8065662}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.10105251, 'bn_Q_moving_variance': 3.7754717}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.11491694, 'bn_Q_moving_variance': 3.7446957}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.12864274, 'bn_Q_moving_variance': 3.7142346}
{'bn_Q_gamma': 4.0, 'bn_Q_beta': 0.0, 'bn_Q_moving_mean': 0.14223127, 'bn_Q_moving_variance': 3.6840856}
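The constant 4.0 values themselves are easy to account for: each of the two BatchNormalization layers stores 16-dimensional gamma and moving_variance vectors initialized to ones (and beta/moving_mean vectors initialized to zeros), so the Euclidean norm of each is sqrt(16) = 4. A quick numpy check of that assumption:

```python
import numpy as np

# Each hidden layer has 16 units, so each BN layer stores vectors of length 16.
gamma = np.ones(16)            # gamma and moving_variance start as ones
beta = np.zeros(16)            # beta and moving_mean start as zeros

print(np.linalg.norm(gamma))   # 4.0  (= sqrt(16))
print(np.linalg.norm(beta))    # 0.0
```

So 4.0 / 0.0 in the diagnostics means "still at initialization"; any drift away from those values indicates that the moving statistics are actually being updated.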
Even with tf.layers.batch_normalization, something still seems fishy. Using the standard tf approach with tf.control_dependencies:
with tf.control_dependencies(update_ops):
    sess.run(out, {inputs[0]: data})
in place of the following two lines in the code above:
sess.run(update_ops, {inputs[0]: data})
sess.run(out, {inputs[0]: data})
produces bn_Q_moving_mean = 0.0 and bn_Q_moving_variance = 4.0.
This happens because tf.keras.layers.BatchNormalization inherits from tf.keras.layers.Layer. The Keras API handles its update ops as part of its fit and evaluate loops. This in turn means that it does not populate the tf.GraphKeys.UPDATE_OPS collection without them.
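The mechanism can be sketched with a toy stand-in (plain Python, not TensorFlow; all names below are illustrative): a Keras-style layer records its update ops on the instance itself (layer.updates), while the graph-level collection is a separate list that nothing fills in automatically.

```python
# Toy sketch, not real TensorFlow: GLOBAL_UPDATE_OPS and ToyBatchNorm are
# illustrative stand-ins for tf.GraphKeys.UPDATE_OPS and the Keras layer.
GLOBAL_UPDATE_OPS = []

class ToyBatchNorm:
    """Records its moving-average updates on the instance, Keras-style."""
    def __init__(self):
        self.updates = []

    def __call__(self, x):
        self.updates.append("assign_moving_avg")  # kept on the layer only
        return x

layer = ToyBatchNorm()
out = layer("hidden")

print(GLOBAL_UPDATE_OPS)                  # [] -- the collection stays empty
GLOBAL_UPDATE_OPS.extend(layer.updates)   # the manual step the fix performs
print(GLOBAL_UPDATE_OPS)                  # ['assign_moving_avg']
```

Code that only ever runs whatever is in the global collection therefore never executes the layer's updates unless someone copies them over, which is exactly the fix below.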
So, in order to make it work, you need to add the updates manually:
hidden = tf.keras.layers.Dense(units, activation=None)(out)
batch_normed = tf.keras.layers.BatchNormalization(trainable=True)
layer = batch_normed(hidden)
This creates a separate class instance, so that its updates can be collected:
tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, batch_normed.updates)
and those updates need to be added to the collection. Also have a look at https://github.com/tensorflow/tensorflow/issues/25525