Keras custom loss function (Elastic-Net)

Asked by 陳建勤 · Tags: python, machine-learning, keras, tensorflow, loss-function

I am trying to write Elastic-Net code. The loss looks like:

(image: the Elastic-Net formula — mean squared error plus L1 and L2 weight penalties)

I want to use this loss function in Keras:

from keras.layers import Input, BatchNormalization, Flatten, Dense
from keras.models import Model

def nn_weather_model():
    ip_weather = Input(shape=(30, 38, 5))
    x_weather = BatchNormalization(name='weather1')(ip_weather)
    x_weather = Flatten()(x_weather)
    Dense100_1 = Dense(100, activation='relu', name='weather2')(x_weather)
    Dense100_2 = Dense(100, activation='relu', name='weather3')(Dense100_1)
    Dense18 = Dense(18, activation='linear', name='weather5')(Dense100_2)
    model_weather = Model(inputs=[ip_weather], outputs=[Dense18])
    model = model_weather
    ip = ip_weather
    op = Dense18
    return model, ip, op

My loss function is:

from keras import backend as K

def cost_function():
    def loss(y_true, y_pred):
        # mse plus the two elastic-net penalties
        return K.mean(K.square(y_pred - y_true)) + L1 + L2
    return loss

That is, mse + L1 + L2.

L1 and L2 are computed as:

weight1=model.layers[3].get_weights()[0]
weight2=model.layers[4].get_weights()[0]
weight3=model.layers[5].get_weights()[0]
L1 = Calculate_L1(weight1,weight2,weight3)
L2 = Calculate_L2(weight1,weight2,weight3)

I use the Calculate_L1 function to compute the L1 penalty over the weights of dense1, dense2 and dense3, and then Calculate_L2 for the L2 penalty.
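The question does not show Calculate_L1 and Calculate_L2 themselves; a minimal numpy sketch of what they presumably compute is below (the lam coefficients are hypothetical — the original code may scale the penalties differently):

```python
import numpy as np

def Calculate_L1(*weights, lam=0.01):
    # L1 penalty: sum of absolute values of every weight entry,
    # scaled by a hypothetical coefficient lam.
    return lam * sum(np.abs(w).sum() for w in weights)

def Calculate_L2(*weights, lam=0.01):
    # L2 penalty: sum of squared weight entries, scaled by lam.
    return lam * sum(np.square(w).sum() for w in weights)
```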

When I train with RB_model.compile(loss=cost_function(), optimizer='RMSprop'), the L1 and L2 variables are not updated on every batch. So I tried recomputing them in a callback at on_batch_begin:

class update_L1L2weight(Callback):
    def __init__(self):
        super(update_L1L2weight, self).__init__()
    def on_batch_begin(self,batch,logs=None):
        weight1=model.layers[3].get_weights()[0]
        weight2=model.layers[4].get_weights()[0]
        weight3=model.layers[5].get_weights()[0]
        L1 = Calculate_L1(weight1,weight2,weight3)
        L2 = Calculate_L2(weight1,weight2,weight3)

How can I compute L1 and L2 in on_batch_begin with a callback and pass the L1, L2 variables into the loss function?

Answered by today:

You can simply use the weight regularization built into Keras on each layer. To do so, pass a regularizer to the layer's kernel_regularizer argument. For example:

from keras import regularizers

model.add(Dense(..., kernel_regularizer=regularizers.l2(0.1)))
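For Elastic-Net specifically, Keras also provides regularizers.l1_l2(l1=..., l2=...), which applies both penalties at once on a layer's kernel. A small numpy sketch of the quantity it adds per layer (the coefficients 0.01 and 0.1 are illustrative, not from the question):

```python
import numpy as np

def l1_l2_penalty(w, l1=0.01, l2=0.1):
    # The per-layer quantity an l1_l2 regularizer contributes:
    # l1 * sum(|w|) + l2 * sum(w^2)
    return l1 * np.abs(w).sum() + l2 * np.square(w).sum()
```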

These regularizers create a loss tensor that is added to the loss function, as implemented in the Keras source code:

# Add regularization penalties
# and other layer-specific losses.
for loss_tensor in self.losses:
    total_loss += loss_tensor
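Because these penalty tensors are built from the layers' weight variables symbolically, they are re-evaluated from the current weights on every batch — no callback needed. A numpy sketch of the total loss Keras ends up minimizing with this approach (coefficients hypothetical, mirroring the asker's mse + L1 + L2):

```python
import numpy as np

def total_loss(y_true, y_pred, weight_matrices, l1=0.01, l2=0.1):
    # The mse term, as in the asker's cost_function.
    mse = np.mean(np.square(y_pred - y_true))
    # Penalties collected from every regularized layer
    # (what the `for loss_tensor in self.losses` loop accumulates).
    penalty = sum(l1 * np.abs(w).sum() + l2 * np.square(w).sum()
                  for w in weight_matrices)
    return mse + penalty
```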