Please add at least a brief comment with your thoughts so that I can improve my question. Thanks. :)

I am trying to train a tf.keras model with gradient accumulation (GA). But rather than using it in a custom training loop, I want to customize the .fit() method by overriding train_step. Is this possible, and how can it be done? The reason is that if we want the benefit of Keras built-in functionality such as fit and callbacks, we don't want to write a custom training loop; but at the same time, if we want to override train_step for some reason (such as GA or anything else), we can customize the fit method and still take advantage of those built-in features.

Also, I know the advantages of using GA, but what are the main drawbacks of using it? Why is it an optional feature of the framework rather than the default?
import tensorflow as tf

# overriding train step
# my attempt
# it's not appropriately implemented
# and need to fix
class CustomTrainStep(tf.keras.Model):
    def __init__(self, n_gradients, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.n_gradients = n_gradients
        self.gradient_accumulation = [tf.zeros_like(this_var)
                                      for this_var in self.trainable_variables]

    def train_step(self, data):
        x, y = data
        batch_size = tf.cast(tf.shape(x)[0], tf.float32)

        # Gradient Tape
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Calculate batch gradients
        gradients = tape.gradient(loss, self.trainable_variables)

        # Accumulate batch gradients
        accum_gradient = [(acum_grad + grad) for acum_grad, grad in
                          zip(self.gradient_accumulation, gradients)]
        accum_gradient = [this_grad / batch_size for this_grad in accum_gradient]

        # apply accumulated gradients
        self.optimizer.apply_gradients(zip(accum_gradient, self.trainable_variables))

        # TODO: reset self.gradient_accumulation

        # update metrics
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
Please run and check it with the following toy setup.
# Model
size = 32
input = tf.keras.Input(shape=(size, size, 3))
efnet = tf.keras.applications.DenseNet121(weights=None,
                                          include_top=False,
                                          input_tensor=input)
base_maps = tf.keras.layers.GlobalAveragePooling2D()(efnet.output)
base_maps = tf.keras.layers.Dense(units=10, activation='softmax',
                                  name='primary')(base_maps)
custom_model = CustomTrainStep(n_gradients=10, inputs=[input], outputs=[base_maps])

# bind all
custom_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=['accuracy'],
    optimizer=tf.keras.optimizers.Adam())
# data
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = tf.expand_dims(x_train, -1)
x_train = tf.repeat(x_train, 3, axis=-1)
x_train = tf.divide(x_train, 255)
x_train = tf.image.resize(x_train, [size,size]) # if we want to resize
y_train = tf.one_hot(y_train , depth=10)
# customized fit
custom_model.fit(x_train, y_train, batch_size=64, epochs=3, verbose = 1)
I found that some others have also tried to achieve this and ran into the same issue. There is a workaround here, but it is quite messy, and I think there should be a better approach.
Yes, it is possible to customize the .fit() method by overriding train_step without a custom training loop. The simple example below shows how to train a simple MNIST classifier with gradient accumulation:
import tensorflow as tf

# custom model: train_step is overridden to accumulate gradients
# over n_gradients steps before applying them
class CustomTrainStep(tf.keras.Model):
    def __init__(self, n_gradients, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.n_gradients = tf.constant(n_gradients, dtype=tf.int32)
        self.n_acum_step = tf.Variable(0, dtype=tf.int32, trainable=False)
        self.gradient_accumulation = [tf.Variable(tf.zeros_like(v, dtype=tf.float32), trainable=False)
                                      for v in self.trainable_variables]

    def train_step(self, data):
        self.n_acum_step.assign_add(1)

        x, y = data
        # Gradient Tape
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Calculate batch gradients
        gradients = tape.gradient(loss, self.trainable_variables)
        # Accumulate batch gradients
        for i in range(len(self.gradient_accumulation)):
            self.gradient_accumulation[i].assign_add(gradients[i])

        # If n_acum_step reaches n_gradients, apply the accumulated gradients
        # to update the variables; otherwise do nothing
        tf.cond(tf.equal(self.n_acum_step, self.n_gradients), self.apply_accu_gradients, lambda: None)

        # update metrics
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

    def apply_accu_gradients(self):
        # apply accumulated gradients
        self.optimizer.apply_gradients(zip(self.gradient_accumulation, self.trainable_variables))

        # reset
        self.n_acum_step.assign(0)
        for i in range(len(self.gradient_accumulation)):
            self.gradient_accumulation[i].assign(tf.zeros_like(self.trainable_variables[i], dtype=tf.float32))
# Model
input = tf.keras.Input(shape=(28, 28))
base_maps = tf.keras.layers.Flatten(input_shape=(28, 28))(input)
base_maps = tf.keras.layers.Dense(128, activation='relu')(base_maps)
base_maps = tf.keras.layers.Dense(units=10, activation='softmax', name='primary')(base_maps)
custom_model = CustomTrainStep(n_gradients=10, inputs=[input], outputs=[base_maps])
# bind all
custom_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=['accuracy'],
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3))
# data
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = tf.divide(x_train, 255)
y_train = tf.one_hot(y_train , depth=10)
# customized fit
custom_model.fit(x_train, y_train, batch_size=6, epochs=3, verbose = 1)
Output:
Epoch 1/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.5053 - accuracy: 0.8584
Epoch 2/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.1389 - accuracy: 0.9600
Epoch 3/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.0898 - accuracy: 0.9748
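As a quick sanity check (my own sketch, not part of the original answer): apply_gradients is only called inside apply_accu_gradients, so the optimizer's iteration counter should advance roughly once per n_gradients training steps.

# Hypothetical sanity check: apply_gradients runs once every n_gradients steps,
# so optimizer.iterations should end up near total_train_steps / n_gradients.
steps_per_epoch = 60000 // 6        # 10000 batches per epoch, as in the log above
total_steps = 3 * steps_per_epoch   # 3 epochs -> 30000 calls to train_step
print(custom_model.optimizer.iterations.numpy())  # expected close to 30000 / 10 = 3000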
Gradient accumulation is a mechanism that splits the batch of samples used for training a neural network into several mini-batches that are run sequentially.

Because GA computes the loss and gradients after each mini-batch, but instead of updating the model parameters it waits and accumulates the gradients over consecutive batches, it can overcome memory constraints, i.e. train the model with less memory as if a large batch size were used.

Example: if you run gradient accumulation with 5 steps and a batch size of 4 images, it serves almost the same purpose as running with a batch size of 20 images.
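To make the mechanism concrete, here is a minimal sketch of the same idea written as a plain training loop (a hypothetical helper, not the train_step override above): gradients are summed over n_gradients mini-batches and only then applied in a single optimizer update.

import tensorflow as tf

# Minimal sketch of gradient accumulation in a plain loop (hypothetical helper):
# sum the gradients of n_gradients consecutive mini-batches, then apply them once.
def train_with_accumulation(model, optimizer, loss_fn, dataset, n_gradients=5):
    accum = [tf.zeros_like(v) for v in model.trainable_variables]
    for step, (x, y) in enumerate(dataset, start=1):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        accum = [a + g for a, g in zip(accum, grads)]
        if step % n_gradients == 0:
            # one parameter update for every n_gradients mini-batches
            optimizer.apply_gradients(zip(accum, model.trainable_variables))
            accum = [tf.zeros_like(v) for v in model.trainable_variables]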
We can also parallelize training when using GA, i.e. aggregate the gradients from multiple machines.

This technique works so well that it is widely used. There are a few things to consider before using it, but I don't think they should be called cons; after all, all GA does is turn 4 + 4 into 2 + 2 + 2 + 2.

If your machine already has enough memory for a sufficiently large batch size, there is no need to use it: an overly large batch size is well known to generalize poorly, and GA will certainly run slower to reach the same batch size that your machine's memory can already handle.
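The 4 + 4 versus 2 + 2 + 2 + 2 analogy can be checked numerically. The snippet below is my own illustration (assuming a sum-reduced loss, which makes the equivalence exact): the gradient of one large batch equals the sum of the gradients of its mini-batches.

import tensorflow as tf

# Hypothetical check of the 4 + 4 vs 2 + 2 + 2 + 2 analogy: with a sum-reduced
# loss, the gradient over the full batch equals the sum of the mini-batch gradients.
w = tf.Variable([[1.0], [2.0]])
x = tf.random.normal([8, 2])
y = tf.random.normal([8, 1])

def grad(xs, ys):
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(tf.square(tf.matmul(xs, w) - ys))
    return tape.gradient(loss, w)

full = grad(x, y)                                      # one batch of 8
accumulated = grad(x[:4], y[:4]) + grad(x[4:], y[4:])  # two mini-batches of 4
print(tf.reduce_max(tf.abs(full - accumulated)).numpy())  # ~0 up to float error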