Mar*_*dam · 12 · python, lstm, keras, recurrent-neural-network
I have the following code in Keras (basically I am adapting this code for my own use), and I get this error:
'ValueError: Error when checking target: expected conv3d_3 to have 5 dimensions, but got array with shape (10, 4096)'
Code:
from keras.models import Sequential
from keras.layers.convolutional import Conv3D
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import pylab as plt
from keras import layers
# We create a layer which take as input movies of shape
# (n_frames, width, height, channels) and returns a movie
# of identical shape.
model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(None, 64, 64, 1),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
                 activation='sigmoid',
                 padding='same', data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')
The data I am feeding in has the shape [1, 10, 64, 64, 1]. So I would like to know where I went wrong, and how I can see the output_shape of each layer.
umu*_*tto · 17
You can get the output shape of each layer via layer.output_shape:
for layer in model.layers:
    print(layer.output_shape)
which gives you:
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 1)
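That last shape, (None, None, 64, 64, 1), is also what the target array passed to fit has to match, which is why a (10, 4096) target triggers the ValueError above. A minimal sketch of one way to line the data up, assuming those 10 rows are flattened 64x64 frames of a single sample:

import numpy as np

# y_flat is a hypothetical placeholder for the (10, 4096) target from the
# error message; each row is assumed to be one flattened 64x64 frame.
y_flat = np.zeros((10, 4096))

# Reshape to the 5-D layout the final Conv3D layer produces:
# (samples, frames, height, width, channels)
y = y_flat.reshape((1, 10, 64, 64, 1))
print(y.shape)  # (1, 10, 64, 64, 1)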
Alternatively, you can print the model with model.summary():
model.summary()
This gives you the number of parameters and the output shape of each layer, as well as the overall model structure, in a nicely formatted table:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 64, 64, 40)  59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 64, 64, 40)  160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 64, 64, 40)  160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 64, 64, 40)  160
_________________________________________________________________
conv_lst_m2d_4 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360
_________________________________________________________________
batch_normalization_4 (Batch (None, None, 64, 64, 40)  160
_________________________________________________________________
conv3d_1 (Conv3D)            (None, None, 64, 64, 1)   1081
=================================================================
Total params: 407,001
Trainable params: 406,681
Non-trainable params: 320
_________________________________________________________________
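As a quick sanity check on the Param # column (a hand calculation, assuming the standard formulas): each ConvLSTM2D layer has four gates, so its parameter count is 4 * filters * (kernel_h * kernel_w * (in_channels + filters) + 1); BatchNormalization contributes 4 parameters per channel, half of them non-trainable, which is where the 320 non-trainable parameters come from; Conv3D contributes kernel_volume * in_channels * filters + filters.

# Reproduce the Param # column by hand
def convlstm2d_params(filters, kernel, in_channels):
    # 4 gates, each a conv over [input, hidden state], plus one bias per filter
    return 4 * filters * (kernel[0] * kernel[1] * (in_channels + filters) + 1)

print(convlstm2d_params(40, (3, 3), 1))   # 59200  -> conv_lst_m2d_1
print(convlstm2d_params(40, (3, 3), 40))  # 115360 -> conv_lst_m2d_2..4
print(4 * 40)                             # 160    -> each BatchNormalization
print(3 * 3 * 3 * 40 * 1 + 1)             # 1081   -> conv3d_1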
If you only want to access information about a specific layer, you can pass the name argument when constructing that layer and then look it up like this:
...
model.add(ConvLSTM2D(..., name='conv3d_0'))
...
model.get_layer('conv3d_0')
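The returned layer object carries the same attributes as above, so (using the hypothetical 'conv3d_0' name from this snippet) you can read its output shape directly:

print(model.get_layer('conv3d_0').output_shape)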
Edit: For reference, this will always be the same as layer.output_shape, so please don't actually use a Lambda or a custom layer for this. But you can use a Lambda layer to echo the shape of the tensor passing through:
...
from keras.layers import Lambda

def print_tensor_shape(x):
    print(x.shape)  # echoes the shape of the tensor flowing through
    return x

model.add(Lambda(print_tensor_shape))
...
Or write a custom layer and print the shape of the tensor inside call():
from keras.layers import Layer

class echo_layer(Layer):
    ...
    def call(self, x):
        print(x.shape)  # echoes the shape of the incoming tensor
        return x
    ...

model.add(echo_layer())