tru*_*mee 9 [regression] [regularized] [keras]
I am trying to set up a nonlinear regression problem in Keras. Unfortunately, the results show that overfitting is occurring. Here is the code:
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers, regularizers

# X_train, X, Y, number_of_neurons and outdim are defined elsewhere in my script.
model = Sequential()
# Five ReLU hidden layers; l2(0) applies no actual penalty, so this run is unregularized.
model.add(Dense(number_of_neurons, input_dim=X_train.shape[1], activation='relu', kernel_regularizer=regularizers.l2(0)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0)))
model.add(Dense(outdim, activation='linear'))
adam = optimizers.Adam(lr=0.001)
model.compile(loss='mean_squared_error', optimizer=adam, metrics=['mae'])
model.fit(X, Y, epochs=1000, batch_size=500, validation_split=0.2, shuffle=True, verbose=2, initial_epoch=0)
The results without regularization are shown here: no regularization. The training mean absolute error (MAE) is much smaller than the validation MAE, and the two maintain a constant gap, which is a sign of overfitting.
I then specified L2 regularization for each layer, like this:
model = Sequential()
model.add(Dense(number_of_neurons, input_dim=X_train.shape[1], activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(Dense(outdim, activation='linear'))
adam = optimizers.Adam(lr=0.001)
model.compile(loss='mean_squared_error', optimizer=adam, metrics=['mae'])
model.fit(X, Y, epochs=1000, batch_size=500, validation_split=0.2, shuffle=True, verbose=2, initial_epoch=0)
Those results are shown here: L2 regularization results. The validation MAE is now close to the training MAE, which is good. However, the training MAE itself is poor at 0.03 (without regularization it was much lower, at 0.0028).
How can I reduce the training MAE while still using regularization?
Imr*_*ran 12
Based on your results, it looks like you need to find the right amount of regularization to balance training accuracy against good generalization to the test set. That may be as simple as reducing the L2 parameter. Try reducing lambda from 0.001 to 0.0001 and compare the results, as in the one-line sketch below.
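A minimal sketch of that change, reusing a hidden-layer definition from your own code (number_of_neurons is your variable):

# Weaker L2 penalty: lambda reduced from 0.001 to 0.0001; apply the same change to every hidden layer.
model.add(Dense(int(number_of_neurons), activation='relu', kernel_regularizer=regularizers.l2(0.0001)))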
If you can't find a good parameter setting for L2, you could try dropout regularization instead. Just add model.add(Dropout(0.2)) between each pair of Dense layers, and experiment with the dropout rate if necessary; a higher dropout rate corresponds to stronger regularization. A sketch of that variant follows.
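Here is what that would look like applied to your architecture, assuming your variables (number_of_neurons, X_train, outdim, X, Y) are defined as before; Dropout is imported from keras.layers:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import optimizers

model = Sequential()
model.add(Dense(number_of_neurons, input_dim=X_train.shape[1], activation='relu'))
model.add(Dropout(0.2))  # randomly zeroes 20% of this layer's outputs during training
model.add(Dense(int(number_of_neurons), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(int(number_of_neurons), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(int(number_of_neurons), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(int(number_of_neurons), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(outdim, activation='linear'))  # no dropout after the output layer
adam = optimizers.Adam(lr=0.001)
model.compile(loss='mean_squared_error', optimizer=adam, metrics=['mae'])
model.fit(X, Y, epochs=1000, batch_size=500, validation_split=0.2, shuffle=True, verbose=2)

Note that dropout is only active at training time; Keras disables it automatically during evaluation and prediction, so the reported validation MAE reflects the full network.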