Tags: python, machine-learning, deep-learning, keras, tensorflow
I am training a simple MLP with Keras to classify MNIST digits. The problem is that no matter which optimizer or learning rate I use, the model does not learn — the loss never decreases and the accuracy is barely better than random guessing.

Here is the code:
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adagrad

model2 = Sequential()
model2.add(Dense(output_dim=512, input_dim=784, activation='relu', name='dense1', kernel_initializer='random_uniform'))
model2.add(Dropout(0.2, name='dropout1'))
model2.add(Dense(output_dim=512, input_dim=512, activation='relu', name='dense2', kernel_initializer='random_uniform'))
model2.add(Dropout(0.2, name='dropout2'))
model2.add(Dense(output_dim=10, input_dim=512, activation='softmax', name='dense3', kernel_initializer='random_uniform'))
model2.compile(optimizer=Adagrad(), loss='categorical_crossentropy', metrics=['accuracy'])
model2.summary()
model2.fit(image_train.as_matrix(), img_keras_lb, batch_size=128, epochs=100)
And the output:
Epoch 1/100
33600/33600 [==============================] - 5s - loss: 14.6704 - acc: 0.0894
Epoch 2/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 3/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 4/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 5/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 6/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 7/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 8/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 9/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 10/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 11/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 12/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 13/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 14/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 15/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 16/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 17/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 18/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 19/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 20/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 21/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
Epoch 22/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
As you can see, the model is not learning anything. I have also tried SGD, Adam, and RMSprop, as well as reducing the batch size to 32, 16, etc.

Any pointers as to why this is happening would be greatly appreciated!
You are using ReLU activations, which simply cut off any activation below 0, together with the random_uniform initializer, whose default parameters are keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None). As you can see, the initial weights are very close to 0: roughly half of them (-0.05 to 0) yield pre-activations that are never activated at all, and the ones that are activated (0 to 0.05) propagate gradients very slowly.
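The effect described above can be sketched without Keras. This is an illustrative numpy simulation (the weight range matches Keras' default RandomUniform; the input is a stand-in for scaled MNIST pixels, not the asker's actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights drawn like Keras' default RandomUniform(minval=-0.05, maxval=0.05)
W = rng.uniform(-0.05, 0.05, size=(784, 512))
x = rng.uniform(0.0, 1.0, size=(128, 784))  # stand-in for scaled MNIST pixels

pre = x @ W                  # pre-activations of the first Dense layer
post = np.maximum(pre, 0.0)  # ReLU zeroes out everything below 0

dead = (pre <= 0).mean()
print(f"fraction of units zeroed by ReLU: {dead:.2f}")  # roughly half
print(f"mean of surviving activations: {post[post > 0].mean():.4f}")
```

Because the weights are symmetric around 0, about half of the units start out zeroed by the ReLU, and the rest carry only small signals.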
My guess is that if you change the initialization to one scaled for the range ReLUs operate on (for example Keras' built-in he_normal or he_uniform initializers, passed as kernel_initializer), your model should converge quickly.
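As a rough sanity check of why the suggested change helps, the sketch below compares the default-style uniform initialization against He initialization (std = sqrt(2 / fan_in), the scale commonly recommended for ReLU layers). This is again a numpy simulation with synthetic inputs, not a run of the asker's model:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in = 784

# Default-style uniform init vs. He init (std = sqrt(2 / fan_in))
w_uniform = rng.uniform(-0.05, 0.05, size=(fan_in, 512))
w_he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, 512))

x = rng.uniform(0.0, 1.0, size=(128, fan_in))
act_uniform = np.maximum(x @ w_uniform, 0.0)  # ReLU outputs, uniform init
act_he = np.maximum(x @ w_he, 0.0)            # ReLU outputs, He init

print(f"uniform init, mean activation: {act_uniform.mean():.4f}")
print(f"He init, mean activation:      {act_he.mean():.4f}")
```

The He-initialized layer produces larger activations on average, so the surviving ReLU units carry a stronger signal and gradients propagate faster. In Keras this corresponds to kernel_initializer='he_normal' on each Dense layer.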