Loss does not change regardless of the learning rate

Rav*_*310 0 python deep-learning keras tensorflow

I have built a deep-learning model somewhat similar to the VGG network, using Keras with the TensorFlow backend. The model definition is as follows:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense

model = Sequential()
# VGG-style convolutional blocks (the Keras 1 argument border_mode is called padding in Keras 2+)
model.add(Conv2D(64, 3, padding='same', activation='relu', input_shape=(180, 320, 3)))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
# Regression head: 9 real-valued outputs
model.add(Dense(9, activation='relu'))

I have tried different combinations of optimizer (SGD, Adam, etc.), loss (MSE, MAE, etc.), and batch size (32 and 64). I even experimented with learning rates ranging from 0.001 to 10000. But even after 20 epochs, the validation loss stays exactly the same no matter which loss function I use, and the training loss barely changes. What am I doing wrong?

What the network is supposed to learn: given an input image, predict the set of 9 real values that can be derived from that image.
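For context, the compile/fit step looks roughly like the sketch below. This is an assumption, not the poster's exact code: the actual optimizer, loss, and learning rate varied across experiments, and X_train / y_train are hypothetical placeholders for the image tensors and 9-value targets.

from keras.optimizers import Adam

# One of the tried configurations: Adam with MAE loss
model.compile(optimizer=Adam(lr=0.001),
              loss='mae',
              metrics=['mae'])

model.fit(X_train, y_train,   # X_train: (N, 180, 320, 3), y_train: (N, 9)
          batch_size=32,
          epochs=100,
          validation_split=0.2)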

Terminal output during training:

    Epoch 1/100
    4800/4800 [==============================] - 96s 20ms/step - loss: 133.6534 - mean_absolute_error: 133.6534 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 2/100
    4800/4800 [==============================] - 49s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 3/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 4/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 5/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 6/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 7/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 8/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 9/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 10/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 11/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 12/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 13/100
    4800/4800 [==============================] - 50s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 14/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 15/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 16/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 17/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 18/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 19/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 20/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 21/100
    4800/4800 [==============================] - 50s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744

Dan*_*ler 5

relu

Don't use relu blindly! It has a constant-zero region with no gradient, so getting stuck there is perfectly normal.
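To see why: relu(x) = max(0, x), so for any x < 0 both the output and the derivative are exactly 0, and no gradient flows back through that unit. A minimal sketch with hypothetical values, in plain NumPy:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # Derivative of relu: 1 where x > 0, else 0
    return (x > 0).astype(float)

# If the pre-activations of the output layer are all negative...
z = np.array([-3.2, -0.7, -15.0])
print(relu(z))       # [0. 0. 0.]  -> every predicted value is 0
print(relu_grad(z))  # [0. 0. 0.]  -> zero gradient, weights never update

If the output layer's units all land in the zero region, every prediction is 0 and the MAE collapses to the mean absolute value of the targets, a constant. That would explain the loss frozen at 132.8033 in the log above.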

  • The worst mistake is the relu on the last layer (a corrected head is sketched after the code below).
    • If you want outputs from 0 to infinity, use 'softplus'
    • For outputs between 0 and 1, use 'sigmoid'
    • For outputs between -1 and +1, use 'tanh'
  • Your learning rates are huge. With relu, you need learning rates of:
    • 0.0001 and below.
  • Try other activations that don't get stuck.
  • Try adding batch normalization before the activation (that way you guarantee some values are above zero no matter what):
    • It also lets you use higher learning rates.

from keras.layers import BatchNormalization, Activation

# Linear conv -> BatchNorm -> relu, instead of relu inside the Conv2D
model.add(Conv2D(....., activation='linear'))
model.add(BatchNormalization())
model.add(Activation('relu'))
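Putting the advice together, a corrected head might look like the following. This is a sketch, not the poster's actual fix: 'linear' is the safe default for a regression output, and the lowered learning rate follows the recommendation above.

from keras.layers import Flatten, Dropout, Dense
from keras.optimizers import Adam

model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
# No relu on the output layer: 'linear' (or 'softplus' if all 9 targets are non-negative)
model.add(Dense(9, activation='linear'))

# With relu hidden layers, keep the learning rate small
model.compile(optimizer=Adam(lr=0.0001), loss='mae', metrics=['mae'])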