Checkpointing Keras model: TypeError: can't pickle _thread.lock objects

Jes*_*ess 7 pickle keras tensorflow

This seems to be an error that has come up in different contexts here in the past, but I'm not dumping the model directly: I'm using the ModelCheckpoint callback. Any idea what could be going wrong?

Information:

  • Keras版本2.0.8
  • Tensorflow版本1.3.0
  • Python 3.6

Minimal example to reproduce the error:

from keras.layers import Input, Lambda, Dense
from keras.models import Model
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam
import tensorflow as tf
import numpy as np

x = Input(shape=(30,3))
low = tf.constant(np.random.rand(30, 3).astype('float32'))
high = tf.constant(1 + np.random.rand(30, 3).astype('float32'))
clipped_out_position = Lambda(lambda x, low, high: tf.clip_by_value(x, low, high),
                                      arguments={'low': low, 'high': high})(x)

model = Model(inputs=x, outputs=[clipped_out_position])
optimizer = Adam(lr=.1)
model.compile(optimizer=optimizer, loss="mean_squared_error")
checkpoint = ModelCheckpoint("debug.hdf", monitor="val_loss", verbose=1, save_best_only=True, mode="min")
training_callbacks = [checkpoint]
model.fit(np.random.rand(100, 30, 3), [np.random.rand(100, 30, 3)], callbacks=training_callbacks, epochs=50, batch_size=10, validation_split=0.33)

Error output:

Train on 67 samples, validate on 33 samples
Epoch 1/50
10/67 [===>..........................] - ETA: 0s - loss: 0.1627Epoch 00001: val_loss improved from inf to 0.17002, saving model to debug.hdf
Traceback (most recent call last):
  File "debug_multitask_inverter.py", line 19, in <module>
    model.fit(np.random.rand(100, 30, 3), [np.random.rand(100, 30, 3)], callbacks=training_callbacks, epochs=50, batch_size=10, validation_split=0.33)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/training.py", line 1631, in fit

?
    validation_steps=validation_steps)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/training.py", line 1233, in _fit_loop
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/callbacks.py", line 73, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/callbacks.py", line 414, in on_epoch_end
    self.model.save(filepath, overwrite=True)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/topology.py", line 2556, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/models.py", line 107, in save_model
    'config': model.get_config()
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/topology.py", line 2397, in get_config
    return copy.deepcopy(config)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 215, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 169, in deepcopy
    rv = reductor(4)
TypeError: can't pickle _thread.lock objects

Yu-*_*ang 13

When a Lambda layer is saved, the arguments passed to it are saved as well. In this case, they contain two tf.Tensors. It seems that Keras does not currently support serializing tf.Tensors in the model config.

However, numpy arrays can be serialized without any problem. So instead of passing tf.Tensors in arguments, you can pass in numpy arrays and convert them into tf.Tensors inside the lambda function.

x = Input(shape=(30,3))
low = np.random.rand(30, 3)
high = 1 + np.random.rand(30, 3)
clipped_out_position = Lambda(lambda x, low, high: tf.clip_by_value(x, tf.constant(low, dtype='float32'), tf.constant(high, dtype='float32')),
                              arguments={'low': low, 'high': high})(x)

One problem with the lines above is that, when trying to load this model, you might see a NameError: name 'tf' is not defined. That's because TensorFlow is not imported in the file where the Lambda layer is reconstructed (core.py).

Changing tf into K.tf fixes this problem. You can also replace tf.constant() with K.constant(), which casts low and high into float32 tensors automatically.

from keras import backend as K
x = Input(shape=(30,3))
low = np.random.rand(30, 3)
high = 1 + np.random.rand(30, 3)
clipped_out_position = Lambda(lambda x, low, high: K.tf.clip_by_value(x, K.constant(low), K.constant(high)),
                              arguments={'low': low, 'high': high})(x)
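As an alternative sketch (not part of the fix above, and assuming your Keras version passes custom_objects into the globals used to rebuild a serialized lambda), you may also be able to keep plain tf inside the lambda and instead supply the missing name when loading the checkpoint:

import tensorflow as tf
from keras.models import load_model

# Hedged: provide the `tf` name that the deserialized lambda expects to find
# in its globals; "debug.hdf" is the checkpoint path used in the question.
model = load_model("debug.hdf", custom_objects={'tf': tf})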


Ala*_*off 6

To clarify: this is not a problem of Keras being unable to pickle a tensor in a Lambda layer (other scenarios are possible, see below), but rather that the arguments of the python function (here: a lambda function) are attempted to be serialized independently of the function itself (here: outside the context of the lambda function). This works for "static" arguments, but fails otherwise. To circumvent it, one should wrap non-static function arguments in another function.
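To see where it actually breaks, here is a minimal sketch (the config_fragment name is just for illustration) that reproduces the failure outside of Keras: Model.get_config() ends up calling copy.deepcopy on a config that contains the Lambda layer's arguments, and deep-copying a tf.Tensor falls back to pickling, which hits the graph's threading locks:

import copy
import numpy as np
import tensorflow as tf

# Roughly what the Lambda layer contributes to the model config: its
# `arguments` dict, with tf.Tensors as values.
config_fragment = {'arguments': {'low': tf.constant(np.random.rand(30, 3).astype('float32'))}}

# deepcopy has no special handling for tf.Tensor in TF 1.x, so it falls back
# to the pickle protocol and raises:
# TypeError: can't pickle _thread.lock objects
copy.deepcopy(config_fragment)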

Here are a couple of workarounds:


  1. Use static variables, i.e. plain python/numpy variables (as already mentioned above):
low = np.random.rand(30, 3)
high = 1 + np.random.rand(30, 3)

x = Input(shape=(30,3))
clipped_out_position = Lambda(lambda x: tf.clip_by_value(x, low, high))(x)

  2. Use functools.partial to wrap your lambda function:
import functools

clip_by_value = functools.partial(
   tf.clip_by_value,
   clip_value_min=low,
   clip_value_max=high)

x = Input(shape=(30,3))
clipped_out_position = Lambda(lambda x: clip_by_value(x))(x)

  3. Use a closure to wrap your lambda function:
low = tf.constant(np.random.rand(30, 3).astype('float32'))
high = tf.constant(1 + np.random.rand(30, 3).astype('float32'))

def clip_by_value(t):
    return tf.clip_by_value(t, low, high)

x = Input(shape=(30,3))
clipped_out_position = Lambda(lambda x: clip_by_value(x))(x)

Note: although you may sometimes get away with skipping the explicit lambda function and using the slightly cleaner snippet:

clipped_out_position = Lambda(clip_by_value)(x)

the missing additional wrapping layer of a lambda function (that is, lambda t: clip_by_value(t)) may still lead to the same problem when the function's arguments are "deep-copied", and should be avoided.


  4. Finally, you can wrap your model logic into a separate Keras layer, which in this particular case may look a bit over-engineered:
x = Input(shape=(30,3))
low = Lambda(lambda t: tf.constant(np.random.rand(30, 3).astype('float32')))(x)
high = Lambda(lambda t: tf.constant(1 + np.random.rand(30, 3).astype('float32')))(x)
clipped_out_position = Lambda(lambda x: tf.clip_by_value(*x))((x, low, high))

Note that tf.clip_by_value(*x) in the last Lambda layer is just an unpacked argument tuple, which could also be written in the more verbose form tf.clip_by_value(x[0], x[1], x[2]).


As a side note: a lambda function that tries to capture (a part of) a class instance, as in the following case, will also break serialization (due to late binding):

# Imports matching the question's minimal example (added so the snippet is self-contained).
import numpy as np
import tensorflow as tf
from keras.layers import Input, Lambda
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint

class MyModel:
    def __init__(self):
        self.low = np.random.rand(30, 3)
        self.high = 1 + np.random.rand(30, 3)

    def run(self):
        x = Input(shape=(30,3))
        clipped_out_position = Lambda(lambda x: tf.clip_by_value(x, self.low, self.high))(x)
        model = Model(inputs=x, outputs=[clipped_out_position])
        optimizer = Adam(lr=.1)
        model.compile(optimizer=optimizer, loss="mean_squared_error")
        checkpoint = ModelCheckpoint("debug.hdf", monitor="val_loss", verbose=1, save_best_only=True, mode="min")
        training_callbacks = [checkpoint]
        model.fit(np.random.rand(100, 30, 3), 
                 [np.random.rand(100, 30, 3)], callbacks=training_callbacks, epochs=50, batch_size=10, validation_split=0.33)

MyModel().run()

This can be solved by ensuring early binding with the default-argument trick:

        (...)
        clipped_out_position = Lambda(lambda x, l=self.low, h=self.high: tf.clip_by_value(x, l, h))(x)
        (...)