Tags: machine-learning, lstm, keras, tensorflow
I have built a model that takes 3 images of a time series plus 5 numeric features as input and produces the next 3 images of the time series. I did this as follows:

The LSTM part produces an output of size 393,216 (3x128x128x8). To match it, I have to set the output of the tabular model to 49,152 so that the input to the next layer has size 442,368 (3x128x128x9). This unnecessary blow-up of the tabular model's Dense layer makes the otherwise efficient LSTM model perform very poorly.
Is there a better way to concatenate the two models? Is there a way to output only 10 units from the tabular model's Dense layer?
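For reference, a quick check of the size arithmetic above (illustrative only, hypothetical variable names, not part of the model code):

num_lstm = 3 * 128 * 128 * 8        # 393,216 values after flattening the ConvLSTM2D output
num_tab = 3 * 128 * 128 * 1         #  49,152 values the tabular Dense layer is forced to produce
num_total = num_lstm + num_tab      # 442,368 = 3 * 128 * 128 * 9, reshaped to (3, 128, 128, 9)
print(num_lstm, num_tab, num_total) # 393216 49152 442368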
The model:
from tensorflow.keras.layers import (Input, ConvLSTM2D, BatchNormalization, Flatten,
                                     Dense, Concatenate, Reshape, Conv3D)
from tensorflow.keras.models import Model

x_input = Input(shape=(None, 128, 128, 3))
x = ConvLSTM2D(32, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(16, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)
x = BatchNormalization()(x)
x = ConvLSTM2D(8, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)
x = BatchNormalization()(x)
x = Flatten()(x)
# x = MaxPooling3D()(x)
x_tab_input = Input(shape=(5))
x_tab = Dense(100, activation="relu")(x_tab_input)
x_tab = Dense(49152, activation="relu")(x_tab)
x_tab = Flatten()(x_tab)
concat = Concatenate()([x, x_tab])
output = Reshape((3,128,128,9))(concat)
output = Conv3D(filters=3, kernel_size=(3, 3, 3), activation='relu', padding="same")(output)
model = Model([x_input, x_tab_input], output)
model.compile(loss='mae', optimizer='rmsprop')
Model summary:
Model: "functional_3"
______________________________________________________________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
======================================================================================================================================================
input_4 (InputLayer) [(None, None, 128, 128, 3)] 0
______________________________________________________________________________________________________________________________________________________
conv_lst_m2d_9 (ConvLSTM2D) (None, None, 128, 128, 32) 40448 input_4[0][0]
______________________________________________________________________________________________________________________________________________________
batch_normalization_9 (BatchNormalization) (None, None, 128, 128, 32) 128 conv_lst_m2d_9[0][0]
______________________________________________________________________________________________________________________________________________________
conv_lst_m2d_10 (ConvLSTM2D) (None, None, 128, 128, 16) 27712 batch_normalization_9[0][0]
______________________________________________________________________________________________________________________________________________________
batch_normalization_10 (BatchNormalization) (None, None, 128, 128, 16) 64 conv_lst_m2d_10[0][0]
______________________________________________________________________________________________________________________________________________________
input_5 (InputLayer) [(None, 5)] 0
______________________________________________________________________________________________________________________________________________________
conv_lst_m2d_11 (ConvLSTM2D) (None, None, 128, 128, 8) 6944 batch_normalization_10[0][0]
______________________________________________________________________________________________________________________________________________________
dense (Dense) (None, 100) 600 input_5[0][0]
______________________________________________________________________________________________________________________________________________________
batch_normalization_11 (BatchNormalization) (None, None, 128, 128, 8) 32 conv_lst_m2d_11[0][0]
______________________________________________________________________________________________________________________________________________________
dense_1 (Dense) (None, 49152) 4964352 dense[0][0]
______________________________________________________________________________________________________________________________________________________
flatten_3 (Flatten) (None, None) 0 batch_normalization_11[0][0]
______________________________________________________________________________________________________________________________________________________
flatten_4 (Flatten) (None, 49152) 0 dense_1[0][0]
______________________________________________________________________________________________________________________________________________________
concatenate (Concatenate) (None, None) 0 flatten_3[0][0]
flatten_4[0][0]
______________________________________________________________________________________________________________________________________________________
reshape_2 (Reshape) (None, 3, 128, 128, 9) 0 concatenate[0][0]
______________________________________________________________________________________________________________________________________________________
conv3d_2 (Conv3D) (None, 3, 128, 128, 3) 732 reshape_2[0][0]
======================================================================================================================================================
Total params: 5,041,012
Trainable params: 5,040,900
Non-trainable params: 112
______________________________________________________________________________________________________________________________________________________
I agree with you that the huge Dense layer (with millions of parameters) is likely to hinder the model's performance. Instead of blowing up the tabular data with a Dense layer, you can go for one of the following two approaches.
Option 1: Tile the x_tab tensor so that it matches the shape you need. This can be achieved with the following steps:
First, there is no need to flatten the tensor encoded by the ConvLSTM2D layers:
x_input = Input(shape=(3, 128, 128, 3))
x = ConvLSTM2D(32, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(16, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x)
x = BatchNormalization()(x)
x = ConvLSTM2D(8, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x)
x = BatchNormalization()(x)  # Shape=(None, 3, 128, 128, 8)
# Commented: x = Flatten()(x)

Second, you can process the tabular data with one or more Dense layers. For example:
dim = 10
x_tab_input = Input(shape=(5,))
x_tab = Dense(100, activation="relu")(x_tab_input)
x_tab = Dense(dim, activation="relu")(x_tab)
# x_tab = Flatten()(x_tab)  # Note: Flattening a 2D tensor leaves the tensor unchanged

Third, we wrap the TensorFlow operation tf.tile in a Lambda layer, effectively creating copies of the x_tab tensor so that it matches the required shape:
def repeat_tabular(x_tab):
    h = x_tab[:, None, None, None, :]    # Shape=(bs, 1, 1, 1, dim)
    h = tf.tile(h, [1, 3, 128, 128, 1])  # Shape=(bs, 3, 128, 128, dim)
    return h

x_tab = Lambda(repeat_tabular)(x_tab)

Finally, we concatenate x and the tiled x_tab tensor along the last axis (you might also consider concatenating along the first axis, corresponding to the channels dimension):
concat = Concatenate(axis=-1)([x, x_tab])  # Shape=(None, 3, 128, 128, 8+dim)
output = concat
output = Conv3D(filters=3, kernel_size=(3, 3, 3), activation='relu', padding="same")(output)
# ...

Note that this solution may be a bit naive, in the sense that the model does not encode the input sequence of images into a low-dimensional representation. This limits the network's receptive field and might result in degraded performance.
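A minimal end-to-end sketch that puts the pieces of Option 1 together (assuming TensorFlow 2.x with the tf.keras imports below; the hyperparameters are simply the ones used above):

import tensorflow as tf
from tensorflow.keras.layers import (Input, ConvLSTM2D, BatchNormalization, Dense,
                                     Lambda, Concatenate, Conv3D)
from tensorflow.keras.models import Model

dim = 10

# Image stream: keep the 5D tensor, do not flatten
x_input = Input(shape=(3, 128, 128, 3))
x = ConvLSTM2D(32, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(16, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x)
x = BatchNormalization()(x)
x = ConvLSTM2D(8, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x)
x = BatchNormalization()(x)                       # (None, 3, 128, 128, 8)

# Tabular stream: small Dense layers only (dim outputs instead of 49,152)
x_tab_input = Input(shape=(5,))
x_tab = Dense(100, activation='relu')(x_tab_input)
x_tab = Dense(dim, activation='relu')(x_tab)      # (None, dim)

# Tile the tabular vector across time and space, then concatenate on the channel axis
def repeat_tabular(t):
    t = t[:, None, None, None, :]                 # (None, 1, 1, 1, dim)
    return tf.tile(t, [1, 3, 128, 128, 1])        # (None, 3, 128, 128, dim)

x_tab_tiled = Lambda(repeat_tabular)(x_tab)
concat = Concatenate(axis=-1)([x, x_tab_tiled])   # (None, 3, 128, 128, 8 + dim)
output = Conv3D(filters=3, kernel_size=(3, 3, 3), activation='relu', padding='same')(concat)

model = Model([x_input, x_tab_input], output)
model.compile(loss='mae', optimizer='rmsprop')
model.summary()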
Option 2: Similar to autoencoders and U-Net, it might be desirable to encode the sequence of images into a low-dimensional representation in order to discard unwanted variation (e.g. noise) while preserving the meaningful signal (e.g. the signal required to infer the next 3 images of the sequence). This can be achieved as follows:
First, encode the input sequence of images into a low-dimensional 2D tensor. For example, something along the lines of:
x_input = Input(shape=(None, 128, 128, 3))
x = ConvLSTM2D(32, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(16, 3, strides=1, padding='same', dilation_rate=2, return_sequences=True)(x)
x = BatchNormalization()(x)
x = ConvLSTM2D(8, 3, strides=1, padding='same', dilation_rate=2, return_sequences=False)(x)
x = BatchNormalization()(x)
x = Flatten()(x)
x = Dense(64, activation='relu')(x)

Note that the last ConvLSTM2D does not return sequences. You may want to explore different encoders to arrive at this point (e.g. you could also use pooling layers here).
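For instance, one such alternative encoder (a rough sketch; the pooling layers and the 64-unit bottleneck are assumptions, not taken from the answer) shrinks the spatial resolution with MaxPooling3D before flattening:

from tensorflow.keras.layers import (Input, ConvLSTM2D, BatchNormalization,
                                     MaxPooling3D, Flatten, Dense)

x_input = Input(shape=(None, 128, 128, 3))
x = ConvLSTM2D(32, 3, padding='same', return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = MaxPooling3D(pool_size=(1, 2, 2))(x)   # halve H and W, keep the time axis: (None, None, 64, 64, 32)
x = ConvLSTM2D(16, 3, padding='same', return_sequences=True)(x)
x = BatchNormalization()(x)
x = MaxPooling3D(pool_size=(1, 2, 2))(x)   # (None, None, 32, 32, 16)
x = ConvLSTM2D(8, 3, padding='same', return_sequences=False)(x)
x = BatchNormalization()(x)                # (None, 32, 32, 8)
x = Flatten()(x)
x = Dense(64, activation='relu')(x)        # low-dimensional encoding: (None, 64)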
Second, process the tabular data with Dense layers. For example:
dim = 10
x_tab_input = Input(shape=(5,))
x_tab = Dense(100, activation="relu")(x_tab_input)
x_tab = Dense(dim, activation="relu")(x_tab)

Third, concatenate the data from the two previous streams:
concat = Concatenate(axis=-1)([x, x_tab])

Fourth, use a Dense + Reshape pair to project the concatenated vector into a sequence of low-resolution images:
h = Dense(3 * 32 * 32 * 3)(concat)
output = Reshape((3, 32, 32, 3))(h)

The shape of output allows the images to be upsampled to shape (128, 128, 3), but it is otherwise arbitrary (e.g. you may want to experiment here as well).
Finally, apply one or more Conv3DTranspose layers to obtain the desired output (e.g. 3 images of shape (128, 128, 3)):
output = tf.keras.layers.Conv3DTranspose(filters=50, kernel_size=(3, 3, 3),
                                         strides=(1, 2, 2), padding='same',
                                         activation='relu')(output)
output = tf.keras.layers.Conv3DTranspose(filters=3, kernel_size=(3, 3, 3),
                                         strides=(1, 2, 2), padding='same',
                                         activation='relu')(output)  # Shape=(None, 3, 128, 128, 3)

The rationale behind transposed convolution layers is discussed here. In essence, a Conv3DTranspose layer works in the opposite direction of a normal convolution: it allows upsampling low-resolution images into high-resolution ones.
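As a quick sanity check of the upsampling path (a standalone sketch, not part of the answer's model), the two strided Conv3DTranspose layers double the spatial resolution twice, 32x32 -> 64x64 -> 128x128, while leaving the 3 time steps untouched:

import tensorflow as tf

low_res = tf.zeros((1, 3, 32, 32, 3))  # (batch, time, height, width, channels)
up1 = tf.keras.layers.Conv3DTranspose(filters=50, kernel_size=(3, 3, 3),
                                      strides=(1, 2, 2), padding='same',
                                      activation='relu')(low_res)
up2 = tf.keras.layers.Conv3DTranspose(filters=3, kernel_size=(3, 3, 3),
                                      strides=(1, 2, 2), padding='same',
                                      activation='relu')(up1)
print(up1.shape)  # (1, 3, 64, 64, 50)
print(up2.shape)  # (1, 3, 128, 128, 3)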