Keras: dice coefficient loss is negative and keeps decreasing with epochs

Deb*_*eba 1 python machine-learning deep-learning keras loss-function

In this Keras implementation of the dice coefficient loss, the loss is the negative of the computed dice coefficient. The loss should decrease as the epochs progress, and with this implementation it does, but since it is always negative it moves from 0 toward negative infinity instead of converging toward 0. Would it be wrong to use (1 - dice coefficient) instead of (-dice coefficient) as the loss? Here is the full Keras implementation I am referring to: https://github.com/jocicmarko/ultrasound-nerve-segmentation/blob/master/train.py

from keras import backend as K

smooth = 1.

def dice_coef(y_true, y_pred):
    # Flatten the masks and compute the soft Dice coefficient;
    # the smooth term avoids division by zero on empty masks.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)


def dice_coef_loss(y_true, y_pred):
    # Negate the coefficient so that maximizing Dice becomes a minimization problem.
    return -dice_coef(y_true, y_pred)
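
For context, here is a minimal sketch of how such a loss and metric are typically wired into model.compile. The tiny Conv2D model below is only a placeholder for the U-Net in the linked script, and the Adam learning rate is an assumption:

from keras.models import Sequential
from keras.layers import Conv2D
from keras.optimizers import Adam

# Tiny placeholder model standing in for the U-Net in the linked train.py.
model = Sequential([Conv2D(1, (1, 1), activation='sigmoid', input_shape=(64, 64, 1))])

model.compile(optimizer=Adam(lr=1e-5),
              loss=dice_coef_loss,   # minimized, so it drifts from 0 toward -1
              metrics=[dice_coef])   # reported alongside, rises from 0 toward 1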

I have shared my experiment log below, although only 2 epochs were run:

Train on 2001 samples, validate on 501 samples
Epoch 1/2
Epoch 00001: loss improved from inf to -0.73789, saving model to unet.hdf5
 - 3229s - loss: -7.3789e-01 - dice_coef: 0.7379 - val_loss: -7.9304e-01 - val_dice_coef: 0.7930
Epoch 2/2
Epoch 00002: loss improved from -0.73789 to -0.81037, saving model to unet.hdf5
 - 3077s - loss: -8.1037e-01 - dice_coef: 0.8104 - val_loss: -8.2842e-01 - val_dice_coef: 0.8284
predict test data
9/9 [==============================] - 4s 429ms/step
dict_keys(['val_dice_coef', 'loss', 'val_loss', 'dice_coef'])

小智 6

Whether you use 1 - dice_coef or -dice_coef should make no difference for convergence. However, 1 - dice_coef gives a more familiar quantity to monitor, since its values lie in the [0, 1] range rather than [-1, 0].
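
As a minimal sketch of that variant (reusing the dice_coef defined above):

def dice_coef_loss(y_true, y_pred):
    # Same gradient as -dice_coef, just shifted by a constant:
    # perfect overlap -> 0, no overlap -> close to 1.
    return 1. - dice_coef(y_true, y_pred)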