Keras network accuracy is zero while the loss is very low

Beh*_*dad 0 python deep-learning keras tensorflow recurrent-neural-network

I have this network:

Tensor("input_1:0", shape=(?, 5, 1), dtype=float32)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 5, 1)              0         
_________________________________________________________________
bidirectional_1 (Bidirection (None, 5, 64)             2176      
_________________________________________________________________
activation_1 (Activation)    (None, 5, 64)             0         
_________________________________________________________________
bidirectional_2 (Bidirection (None, 5, 128)            16512     
_________________________________________________________________
activation_2 (Activation)    (None, 5, 128)            0         
_________________________________________________________________
bidirectional_3 (Bidirection (None, 1024)              656384    
_________________________________________________________________
activation_3 (Activation)    (None, 1024)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 1025      
_________________________________________________________________
p_re_lu_1 (PReLU)            (None, 1)                 1         
=================================================================
Total params: 676,098
Trainable params: 676,098
Non-trainable params: 0
_________________________________________________________________
None
Train on 27496 samples, validate on 6875 samples

I compile and fit it as follows:

model.compile(loss='mse',optimizer=Adamx,metrics=['accuracy'])
model.fit(x_train,y_train,batch_size=100,epochs=10,validation_data=(x_test,y_test),verbose=2)

When I run it and evaluate it on unseen data, it returns 0.0 accuracy with a very low loss. I can't figure out what the problem is.

Epoch 10/10
 - 29s - loss: 1.6972e-04 - acc: 0.0000e+00 - val_loss: 1.7280e-04 - val_acc: 0.0000e+00

Mit*_*iku 6

That is exactly what you should be getting. Your model is working fine; it is your metric that is not appropriate. The objective of training is to minimize the loss function, not to increase accuracy.

Since you are using PReLU as the activation of the last layer, the network always produces floating-point outputs. Comparing these float outputs with the true labels to measure accuracy is not a sensible choice: unless the model predicts exactly the same value as the true label, the accuracy stays at zero, even when the predictions are extremely close.

For example, if y_true is 1.0 and the model predicts 0.99999, the prediction does not count toward accuracy, because 1.0 != 0.99999.
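
A minimal sketch (plain NumPy, values purely illustrative) of why an exact-match comparison on continuous outputs always yields zero while the mean squared error can still be tiny, just like the training log above:

import numpy as np

y_true = np.array([1.0, 0.5, 2.0])
y_pred = np.array([0.99999, 0.50001, 1.99998])  # very close, but never exactly equal

print(np.mean(y_true == y_pred))        # 0.0  -- no prediction matches its label exactly
print(np.mean((y_true - y_pred) ** 2))  # ~1e-10 -- yet the MSE loss is very small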

Update: The choice of metric depends on the type of problem. Keras also lets you implement custom metrics. Assuming the problem here is a linear regression, and two values are considered equal when they differ by less than 0.01, a custom metric can be defined as:

import keras.backend as K
import tensorflow as tf

accepted_diff = 0.01

def linear_regression_equality(y_true, y_pred):
    # A prediction counts as correct if it is within accepted_diff of the label
    diff = K.abs(y_true - y_pred)
    return K.mean(K.cast(diff < accepted_diff, tf.float32))

Now you can use this metric for your model:

model.compile(loss='mse',optimizer=Adamx,metrics=[linear_regression_equality])
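
If you want to convince yourself that the metric behaves as intended, a quick sanity check on toy tensors (using the linear_regression_equality defined above; TensorFlow backend assumed, values made up) could look like this:

y_true = K.constant([[1.00], [0.50], [2.00]])
y_pred = K.constant([[1.005], [0.52], [1.999]])  # 1st and 3rd are within 0.01 of the label, 2nd is not

print(K.eval(linear_regression_equality(y_true, y_pred)))  # ~0.6667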