Xpl*_*n5k asked (tags: python-3.x, keras, tensorflow):
I am trying to develop an image-denoising model. I have been reading up on how to calculate the memory usage of a neural network, and the standard approach seems to be:
params = depth_n x (kernel_width x kernel_height) x depth_(n-1) + depth_n
Adding up all the parameters in the network I end up with 1,038,097, which is roughly 4.2 MB. I seem to have miscalculated the last layer somewhere, since Keras reports 1,038,497 parameters, but that is only a tiny difference. 4.2 MB covers only the parameters, though; I have read that this should be multiplied by roughly 3 to account for backpropagation and other required computations, which gives about 13 MB.
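As a sanity check, here is a quick sketch of that per-layer formula in Python; the counts it produces match the per-layer numbers in the Keras summary shown further down:

# params = out_depth * (kernel_w * kernel_h * in_depth) + out_depth (the bias term)
def conv_params(out_depth, kernel, in_depth):
    return out_depth * (kernel * kernel * in_depth) + out_depth

print(conv_params(1024, 3, 1))    # conv2d_1:  10240
print(conv_params(64, 3, 1024))   # conv2d_2: 589888
print(conv_params(64, 7, 64))     # conv2d_5: 200768
print(conv_params(1, 5, 32))      # output layer (conv2d_10): 801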
I have about 11 GB of GPU memory to work with, and this model already exhausts it. Where does all the extra required memory come from? What am I missing? I know this post may be flagged as a duplicate, but none of the other posts seem to address the question I am asking.
My model:
from keras.initializers import RandomUniform
from keras.layers import Input, Conv2D

def network(self):
    weights = RandomUniform(minval=-0.05, maxval=0.05, seed=None)
    input_img = Input(shape=(self.img_rows, self.img_cols, self.channels))
    conv1 = Conv2D(1024, (3, 3), activation='tanh', kernel_initializer=weights,
                   padding='same', use_bias=True)(input_img)
    conv2 = Conv2D(64, (3, 3), activation='tanh', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv1)
    conv3 = Conv2D(64, (3, 3), activation='tanh', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv2)
    conv4 = Conv2D(64, (3, 3), activation='relu', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv3)
    conv5 = Conv2D(64, (7, 7), activation='relu', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv4)
    conv6 = Conv2D(64, (5, 5), activation='relu', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv5)
    conv7 = Conv2D(32, (5, 5), activation='relu', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv6)
    conv8 = Conv2D(32, (3, 3), activation='relu', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv7)
    conv9 = Conv2D(16, (3, 3), activation='relu', kernel_initializer=weights,
                   padding='same', use_bias=True)(conv8)
    # Note: the output layer is connected to conv8, so conv9 is never used
    # (which is why conv2d_9 does not appear in the summary below).
    decoded = Conv2D(1, (5, 5), kernel_initializer=weights,
                     padding='same', activation='sigmoid', use_bias=True)(conv8)
    return input_img, decoded

def compiler(self):
    self.model.compile(optimizer='RMSprop', loss='mse')
    self.model.summary()
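The question does not show how network() and compiler() are tied together; presumably the surrounding class builds the model along these lines (a hypothetical sketch: the Model import and the build() method below are assumptions, not part of the original code):

from keras.models import Model

def build(self):
    # Hypothetical glue code inside the same class (not shown in the question).
    input_img, decoded = self.network()
    self.model = Model(input_img, decoded)   # connect input to output
    self.compiler()                          # compiles the model and prints the summary below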
I realize my model is naive in many ways and has plenty of room for improvement (dropout, other filter sizes and counts, choice of optimizer, and so on), and all suggestions are gladly received, but the actual question remains: why does this model consume so much memory? Is it because of the large depth of conv1?
Model summary:
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 1751, 480, 1)      0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 1751, 480, 1024)   10240
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 1751, 480, 64)     589888
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 1751, 480, 64)     36928
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 1751, 480, 64)     36928
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 1751, 480, 64)     200768
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 1751, 480, 64)     102464
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 1751, 480, 32)     51232
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 1751, 480, 32)     9248
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 1751, 480, 1)      801
=================================================================
Total params: 1,038,497
Trainable params: 1,038,497
Non-trainable params: 0
_________________________________________________________________
The*_*ube answered:
You are right, it is because of conv1. What you have to account for is the memory needed to store the activations:

As your model.summary() shows, the output of that layer has shape (None, 1751, 480, 1024). For a single image that is 1751*480*1024 values in total. Since your data is presumably float32, each value takes 4 bytes to store, so the output of this layer alone requires 1751*480*1024*4 bytes, roughly 3.2 GB per image.

If you changed the number of filters in that layer to 64, you would only need about 200 MB per image.

Either change the number of filters or reduce the batch size to 1.
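To make that concrete, here is a rough sketch of the same accounting, assuming float32 activations and ignoring gradients, optimizer state and cuDNN workspace memory (so the real footprint is higher still); it sums the forward-pass activation storage directly from the layer output shapes:

import numpy as np

def activation_bytes(model, batch_size=1, bytes_per_value=4):
    # Storage needed to hold every layer's output (float32 = 4 bytes per value).
    total = 0
    for layer in model.layers:
        shape = layer.output_shape                # e.g. (None, 1751, 480, 1024)
        per_sample = np.prod([d for d in shape if d is not None])
        total += per_sample * batch_size * bytes_per_value
    return int(total)

# conv2d_1 alone: 1751 * 480 * 1024 * 4 bytes, roughly 3.2 GB per image.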