"This model has not yet been built" error on model.summary()

bac*_*chr 6 python keras tensorflow tensorflow2.0

I define my Keras model as follows:

class ConvLayer(Layer):
    def __init__(self, nf, ks=3, s=2, **kwargs):
        self.nf = nf
        self.grelu = GeneralReLU(leak=0.01)
        self.conv = (Conv2D(filters     = nf,
                            kernel_size = ks,
                            strides     = s,
                            padding     = "same",
                            use_bias    = False,
                            activation  = "linear"))
        super(ConvLayer, self).__init__(**kwargs)

    def rsub(self): return -self.grelu.sub
    def set_sub(self, v): self.grelu.sub = -v
    def conv_weights(self): return self.conv.weight[0]

    def build(self, input_shape):
        # No weight to train.
        super(ConvLayer, self).build(input_shape)  # Be sure to call this at the end

    def compute_output_shape(self, input_shape):
        output_shape = (input_shape[0],
                        input_shape[1]/2,
                        input_shape[2]/2,
                        self.nf)
        return output_shape

    def call(self, x):
        return self.grelu(self.conv(x))

    def __repr__(self):
        return f'ConvLayer(nf={self.nf}, activation={self.grelu})'
class ConvModel(tf.keras.Model):
    def __init__(self, nfs, input_shape, output_shape, use_bn=False, use_dp=False):
        super(ConvModel, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes

        # backbone layers
        self.convs = [ConvLayer(nfs[0], s=1, input_shape=input_shape)]
        self.convs += [ConvLayer(nf) for nf in nfs[1:]]
        # classification layers
        self.convs.append(AveragePooling2D())
        self.convs.append(Dense(output_shape, activation='softmax'))

    def call(self, inputs):
        for layer in self.convs: inputs = layer(inputs)
        return inputs

I can compile this model without any problem:

>>> model.compile(optimizer=tf.keras.optimizers.Adam(lr=lr), 
              loss='categorical_crossentropy',
              metrics=['accuracy'])

But when I ask for the model's summary, I see this error:

>>> model = ConvModel(nfs, input_shape=(32, 32, 3), output_shape=num_classes)
>>> model.summary()
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-220-5f15418b3570> in <module>()
----> 1 model.summary()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in summary(self, line_length, positions, print_fn)
   1575     """
   1576     if not self.built:
-> 1577       raise ValueError('This model has not yet been built. '
   1578                        'Build the model first by calling `build()` or calling '
   1579                        '`fit()` with some data, or specify '

ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.

I am passing input_shape to the first layer of the model, so why is this error raised?

Vis*_*ati 15

There is a big difference between Keras subclassed models and the other Keras models (Sequential and Functional).

Sequential and Functional models are data structures that represent a DAG of layers. Put simply, a Functional or Sequential model is a static graph of layers built by stacking layers on top of one another like LEGO bricks. So when you provide input_shape to the first layer, these (Functional and Sequential) models can infer the shapes of all the other layers and build the model, and you can then print the input/output shapes with model.summary(). For contrast, see the short Sequential sketch right after this paragraph.
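A minimal Sequential sketch (the layer sizes here are arbitrary, chosen only for illustration) in which summary() works right away because the first layer carries input_shape:

import tensorflow as tf
from tensorflow.keras import layers

# Because the first layer declares input_shape, Keras can propagate shapes
# through the static layer graph and build the model immediately, so
# summary() works without compiling or fitting.
seq_model = tf.keras.Sequential([
    layers.Conv2D(16, 3, strides=2, padding='same',
                  activation='relu', input_shape=(32, 32, 3)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation='softmax'),
])

seq_model.summary()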

Subclassed models, on the other hand, are defined through a body of Python code (the call method). For a subclassed model there is no graph of layers; there is no way to know how the layers are connected to each other (because that is defined inside the body of call, not as an explicit data structure), so the input/output shapes cannot be inferred. For a subclassed model, the input/output shapes are unknown until the model is first run on suitable data. compile() performs a deferred compilation and waits for real data. To let the model infer the shapes of its intermediate layers, you have to run it on suitable data and only then call model.summary(). If the model has not been run on any data, it throws the error you saw. See the GitHub gist for the complete code.

Below is an example from the TensorFlow website.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class ThreeLayerMLP(keras.Model):

  def __init__(self, name=None):
    super(ThreeLayerMLP, self).__init__(name=name)
    self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
    self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
    self.pred_layer = layers.Dense(10, name='predictions')

  def call(self, inputs):
    x = self.dense_1(inputs)
    x = self.dense_2(x)
    return self.pred_layer(x)

def get_model():
  return ThreeLayerMLP(name='3_layer_mlp')

model = get_model()

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=keras.optimizers.RMSprop())

model.summary() # This will throw an error as follows
# ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.

# Need to run with real data to infer shape of different layers
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=1)

model.summary()

Thanks!


Vla*_*lad 8

The error tells you what to do:

This model has not yet been built. Build the model first by calling build()

model.build(input_shape) # `input_shape` is the shape of the input data
                         # e.g. input_shape = (None, 32, 32, 3)
model.summary()
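Applied to the model from the question, a minimal sketch (assuming nfs and num_classes are defined as in the question) looks like this:

model = ConvModel(nfs, input_shape=(32, 32, 3), output_shape=num_classes)

# The leading None is the batch dimension, which can stay unspecified.
model.build(input_shape=(None, 32, 32, 3))
model.summary()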

  • If you add an InputLayer at the beginning, or apply the model to input data with model(input_data), there is no need to build a Sequential() model explicitly. In both cases model.build() is called implicitly. Glad to help. (3 upvotes)
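As the comment says, running the model on a batch of data has the same effect; a small sketch (the dummy tensor shape matches the question's 32x32x3 input):

import tensorflow as tf

# One dummy batch is enough: Keras traces call() and records the
# input/output shapes of every layer, which implicitly builds the model.
dummy_batch = tf.zeros((1, 32, 32, 3))
_ = model(dummy_batch)

model.summary()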

小智 8

Another way is to add the input_shape argument to the first layer, like this:

model = Sequential()
model.add(Bidirectional(LSTM(n_hidden, return_sequences=False, dropout=0.25,
                             recurrent_dropout=0.1),
                        input_shape=(n_steps, dim_input)))
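For completeness, a self-contained version of that idea; n_hidden, n_steps, and dim_input are placeholder values chosen only for illustration:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

n_hidden, n_steps, dim_input = 64, 20, 8  # placeholder values

model = Sequential()
# input_shape goes on the Bidirectional wrapper, the first layer of the model.
model.add(Bidirectional(LSTM(n_hidden, return_sequences=False, dropout=0.25,
                             recurrent_dropout=0.1),
                        input_shape=(n_steps, dim_input)))
model.add(Dense(1))

model.summary()  # builds immediately, no compiling or fitting needed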

  • @chikitin: you have to make sure you add input_shape inside the parentheses of Bidirectional, not those of LSTM. It is hard to see in the formatting. (3 upvotes)