How to continue training after loading a model on multiple GPUs in TensorFlow 2.0 with the Keras API?

Ris*_*wat 5 multiple-gpu text-classification tensorflow tensorflow2.0

I trained a text-classification model containing an RNN in TensorFlow 2.0 using the Keras API. I trained it on multiple GPUs (2) using tf.distribute.MirroredStrategy() from here. I saved a checkpoint of the model after every epoch using tf.keras.callbacks.ModelCheckpoint('file_name.h5'). Now I want to resume training from the last saved checkpoint, using the same number of GPUs. After loading the checkpoint inside tf.distribute.MirroredStrategy() like this -

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
   model = tf.keras.models.load_model('file_name.h5')

it throws the following error:

File "model_with_tfsplit.py", line 94, in <module>
    model =tf.keras.models.load_model('TF_model_onfull_2_03.h5') # Loading for retraining
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 138, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 187, in load_model_from_hdf5
    model._make_train_function()
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2015, in _make_train_function
    params=self._collected_trainable_weights, loss=self.total_loss)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 500, in get_updates
    grads = self.get_gradients(loss, params)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 391, in get_gradients
    grads = gradients.gradients(loss, params)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/ops/gradients_impl.py", line 158, in gradients
    unconnected_gradients)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/ops/gradients_util.py", line 541, in _GradientsHelper
    for x in xs
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow_core/python/distribute/values.py", line 716, in handle
    raise ValueError("`handle` is not available outside the replica context"
ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call
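For context, the checkpoint was produced by a setup along these lines (a minimal sketch: the small Dense model and the random data are stand-ins for my actual RNN text classifier, and the shapes are illustrative):

```python
import os
import numpy as np
import tensorflow as tf

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    # Stand-in architecture; the real model is an RNN text classifier.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(16, 4).astype("float32")
y = np.random.randint(0, 2, size=(16,))

# Save the full model to HDF5 after every epoch, as described above.
ckpt = tf.keras.callbacks.ModelCheckpoint("file_name.h5")
model.fit(x, y, epochs=1, verbose=0, callbacks=[ckpt])

saved = os.path.exists("file_name.h5")
```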

Now I am not sure where the problem lies. Also, if I do not use this mirrored strategy for multiple GPUs, training appears to start from the beginning, but after a few steps it reaches the same accuracy and loss values as before the model was saved. I am not sure whether that behavior is normal.

Thank you! Sahrawat

Sri*_*adi 1

Create the model under the distribution scope and then restore the weights with the load_weights method. In this example, get_model returns an instance of tf.keras.Model:

def get_model():
    ...
    return model

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = get_model()
    model.load_weights('file_name.h5')
    model.compile(...)
model.fit(...)
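Putting this pattern together, an end-to-end sketch might look like the following. The architecture, data, and the checkpoint name ckpt.weights.h5 are illustrative stand-ins, not the asker's actual RNN; note it saves weights only, so re-compiling afterwards resets the optimizer state (which may explain training briefly looking like it restarted before recovering):

```python
import numpy as np
import tensorflow as tf

def get_model():
    # Stand-in architecture; replace with the original model definition.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

x = np.random.rand(16, 4).astype("float32")
y = np.random.randint(0, 2, size=(16,))

# First run: train briefly and save only the weights.
model = get_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=1, verbose=0)
model.save_weights("ckpt.weights.h5")
w_saved = model.get_weights()

# Resume: rebuild the model inside the strategy scope, restore the
# weights, re-compile, and continue training.
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    resumed = get_model()
    resumed.load_weights("ckpt.weights.h5")
    resumed.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
w_loaded = resumed.get_weights()
resumed.fit(x, y, epochs=1, verbose=0)
```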