Training a fully convolutional neural network with variable-size inputs takes unreasonably long in Keras/TensorFlow

Ser*_*ych 13 conv-neural-network keras tensorflow

I am trying to implement an FCNN for image classification that can accept variable-size inputs. The model is built in Keras with the TensorFlow backend.

Consider the following toy example:

# Keras 1.x API (channels-first ordering), matching the summary below
from keras.models import Sequential
from keras.layers import Activation, Convolution2D, MaxPooling2D, GlobalAveragePooling2D

nb_channels = 1  # 1 (grayscale) or 3 (rgb)
nb_classes = 3

model = Sequential()

# width and height are None because we want to process images of variable size
model.add(Convolution2D(32, 3, 3, input_shape=(nb_channels, None, None), border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Convolution2D(16, 1, 1))
model.add(Activation('relu'))

model.add(Convolution2D(8, 1, 1))
model.add(Activation('relu'))

# reduce the number of dimensions to the number of classes
model.add(Convolution2D(nb_classes, 1, 1))
model.add(Activation('relu'))

# do global pooling to yield one value per class
model.add(GlobalAveragePooling2D())

model.add(Activation('softmax'))

This model runs fine, but I have run into a performance problem: training on variable-size images takes an unreasonably long time compared to training on fixed-size inputs. If I resize all images up to the maximum size present in the dataset, training the model takes far less time than training on the variable-size inputs. So is `input_shape=(nb_channels, None, None)` the right way to specify variable-size inputs? And is there any way to mitigate this performance problem?
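One common mitigation, offered here as a hedged sketch rather than anything from the original post, is shape bucketing: group the images by shape so that every batch contains tensors of a single size. The backend then only has to specialize for each distinct input shape once, instead of effectively running batch-size-1 training on every sample. The helper below is a hypothetical illustration in plain NumPy; `batches_by_shape` is not a Keras API.

```python
from collections import defaultdict

import numpy as np

def batches_by_shape(images, batch_size):
    """Group variable-size images into batches of identical shape.

    Each yielded batch stacks into one dense tensor, so the backend
    does not have to re-specialize for a new input shape per sample.
    """
    buckets = defaultdict(list)
    for img in images:
        buckets[img.shape].append(img)
    for bucket in buckets.values():
        for i in range(0, len(bucket), batch_size):
            yield np.stack(bucket[i:i + batch_size])

# Toy data: channels-first grayscale images of two different sizes.
images = [np.zeros((1, 32, 32)), np.zeros((1, 64, 48)), np.zeros((1, 32, 32))]
batches = list(batches_by_shape(images, batch_size=8))
```

Each batch can then be fed to `model.train_on_batch` as usual; only the number of distinct shapes, not the number of images, determines how often the graph must handle a new input size.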

Update

model.summary() for the model with 3 classes and grayscale images:

Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
convolution2d_1 (Convolution2D)  (None, 32, None, None) 320        convolution2d_input_1[0][0]      
____________________________________________________________________________________________________
activation_1 (Activation)        (None, 32, None, None) 0          convolution2d_1[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)    (None, 32, None, None) 0          activation_1[0][0]               
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 32, None, None) 9248       maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)    (None, 32, None, None) 0          convolution2d_2[0][0]            
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D)  (None, 16, None, None) 528        maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
activation_2 (Activation)        (None, 16, None, None) 0          convolution2d_3[0][0]            
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D)  (None, 8, None, None) 136         activation_2[0][0]               
____________________________________________________________________________________________________
activation_3 (Activation)        (None, 8, None, None) 0           convolution2d_4[0][0]            
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D)  (None, 3, None, None) 27          activation_3[0][0]               
____________________________________________________________________________________________________
activation_4 (Activation)        (None, 3, None, None) 0           convolution2d_5[0][0]            
____________________________________________________________________________________________________
globalaveragepooling2d_1 (Global (None, 3)             0           activation_4[0][0]               
____________________________________________________________________________________________________
activation_5 (Activation)        (None, 3)             0           globalaveragepooling2d_1[0][0]   
====================================================================================================
Total params: 10,259
Trainable params: 10,259
Non-trainable params: 0

Mar*_*ris 0

Images of different sizes mean images of similar things at different scales. If the scale differences are significant, the relative positions of those similar things will shift from the center of the frame toward the top left as the image size shrinks. The (simple) network architecture shown is spatially aware, so it is consistent that the model's convergence rate would degrade: data at different scales is inconsistent. This architecture is not well suited to finding the same thing in different or multiple places.


Some amount of shearing, rotation, and mirroring will help the model generalize, but only combined with rescaling to a consistent size. So when you resize, you resolve the scale conflict and make the input data spatially consistent.
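As a minimal sketch of that "rescale everything to one size" step, the function below does a nearest-neighbour resize with plain NumPy. It is a hypothetical stand-in for a real resizing routine (e.g. `PIL.Image.resize` or `scipy.ndimage.zoom`), just to show the mixed-size dataset being normalized to a single training shape.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize for a (channels, height, width) array."""
    c, h, w = img.shape
    # For each output pixel, pick the source pixel it maps onto.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[:, rows[:, None], cols[None, :]]

# Normalize a mixed-size dataset to a single 64x64 input size.
dataset = [np.ones((1, 32, 48)), np.ones((1, 100, 80))]
uniform = np.stack([resize_nearest(img, 64, 64) for img in dataset])
```

After this step every sample has the same shape, so the model trains on one fixed input size and the slowdown described in the question disappears.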


In short, I think this network architecture is simply not suited to, and not capable of, the task you are giving it, namely handling varying scales.
