Tags: python, deep-learning, keras, tensorflow
What are the guidelines for setting weight decay (e.g. an l2 penalty)? Mostly, how do I track whether it is "working" throughout training, i.e. whether the weights are actually decaying compared to having no l2 penalty, and by how much?
A common approach is to "try a range of values and see what works", but its pitfall is a lack of orthogonality: l2=2e-4 may work best in network X but not in network Y. A workaround is to guide weight decay in a subnetwork fashion: (1) group layers (e.g. Conv1D stacks and LSTM layers separately), (2) set target weight norms, (3) track.
(1): see here; the same parameters, and even the suggested weight values, won't apply to convolutions - hence the need for different groupings (a minimal sketch is given further below).
(2): a sensible quantity to regularize is the l2-norm of the weight matrix; the next question is the axis with respect to which to compute it. A feature-extraction oriented approach is to pick the channels axis (last in Keras), yielding a vector of length = number of channels / features, so that each element is a channel's l2-norm.
(3): these l2-norm vectors can be appended to a list iteratively, or their mean/max can be kept as briefer aggregate statistics - then plotted at the end of training.
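To make (2) and (3) concrete, a minimal NumPy-only sketch (the kernel shape and variable names here are illustrative; this is not the helper used in the full example below):

import numpy as np

# hypothetical Conv1D kernel: (kernel_size, in_channels, out_channels) in Keras
W = np.random.randn(8, 64, 32)

# (2): l2-norm per output channel -- reduce over every axis except the last (channels) axis
channel_l2 = np.sqrt((W ** 2).sum(axis=(0, 1)))   # shape: (32,)

# (3): keep briefer aggregate stats per batch, to be plotted after training
norm_history = []
norm_history.append((channel_l2.mean(), channel_l2.max()))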
The complete example is shown below; the key function, weights_norm, comes from See RNN (imported below). I also recommend Keras AdamW for improved weight decay.
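To illustrate the subnetwork-wise grouping of point (1), a minimal sketch assuming a mixed Conv1D + LSTM model (the layer sizes and the wd_conv / wd_lstm values are purely illustrative, not recommendations): each layer group gets its own l2 penalty, so it can be tuned and tracked independently.

from keras.layers import Input, Conv1D, LSTM, Dense
from keras.models import Model
from keras.regularizers import l2

wd_conv, wd_lstm = 2e-4, 1e-4   # illustrative per-group penalties

ipt = Input(batch_shape=(32, 100, 64))
x   = Conv1D(64, 8, padding='same', kernel_regularizer=l2(wd_conv))(ipt)  # conv group
x   = LSTM(32, kernel_regularizer=l2(wd_lstm),
           recurrent_regularizer=l2(wd_lstm))(x)                          # recurrent group
out = Dense(1)(x)
grouped_model = Model(ipt, out)
grouped_model.compile('adam', 'mse')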
Interpretation:

- wd=2e-3 decays the output layer's weights more strongly than 2e-4 does, but not the input layer's, suggesting counterbalancing interactions with the bottleneck layer.
- wd=2e-3 yields a smaller variance of the weight norms than 2e-4, suggesting stronger gradients toward the output. It would be interesting to explore the behavior further by adding BatchNormalization.
Code & explanation; the following is accomplished:

- Train & track progress:
  - set n_batches, wd (the l2 penalty), and n_epochs
  - make an l2_stats dict to track progress
  - per batch, call weights_norm() and append the result to l2_stats
- Preprocess the progress data for plotting:
  - omit_names excludes the bias weights (omit_names='bias' below)
  - l2_stats is convenient to append to, but must be converted to an np.ndarray of proper dims; unpack such that .shape == (n_epochs, n_layers, n_weights, n_batches) -> (n_rows, n_cols, hists_per_subplot). Note that this requires the number of tracked weight matrices per layer to be the same.
- Plot:
  - set xlims and ylim for an even comparison across different values of wd
  - by default, np.mean (orange) and np.max are computed; the latter is also how Keras handles maxnorm weight regularization.

import numpy as np
import tensorflow as tf
import random
np.random.seed(1)
random.seed(2)
tf.compat.v1.set_random_seed(3)
from keras.layers import Input, Conv1D
from keras.models import Model
from keras.regularizers import l2
from see_rnn import weights_norm, features_hist_v2
########### Model & data funcs ################################################
def make_model(batch_shape, layer_kw={}):
    """Conv1D autoencoder"""
    dim = batch_shape[-1]
    bdim = dim // 2

    ipt = Input(batch_shape=batch_shape)
    x   = Conv1D(dim, 8, activation='relu', **layer_kw)(ipt)
    x   = Conv1D(bdim, 1, activation='relu', **layer_kw)(x)  # bottleneck
    out = Conv1D(dim, 8, activation='linear', **layer_kw)(x)

    model = Model(ipt, out)
    model.compile('adam', 'mse')
    return model

def make_data(batch_shape, n_batches):
    X = Y = np.random.randn(n_batches, *batch_shape)
    return X, Y
########### Train setup #######################################################
batch_shape = (32, 100, 64)
n_epochs = 5
n_batches = 200
wd = 2e-3
layer_kw = dict(padding='same', kernel_regularizer=l2(wd))
model = make_model(batch_shape, layer_kw)
X, Y = make_data(batch_shape, n_batches)
## Train ####################
l2_stats = {}
for epoch in range(n_epochs):
    l2_stats[epoch] = {}
    for i, (x, y) in enumerate(zip(X, Y)):
        model.train_on_batch(x, y)
        print(end='.')

        verbose = bool(i == len(X) - 1)  # if last batch of epoch, print last results
        if verbose:
            print()
        l2_stats[epoch] = weights_norm(model, [1, 3], l2_stats[epoch],
                                       omit_names='bias', verbose=verbose)
    print("Epoch", epoch + 1, "finished")
    print()
########### Preprocess funcs ##################################################
def _get_weight_names(model, layer_names, omit_names):
    if isinstance(omit_names, str):  # accept a single substring or a list of substrings
        omit_names = [omit_names]
    weight_names = []
    for name in layer_names:
        layer = model.get_layer(name=name)
        for w in layer.weights:
            if not any(to_omit in w.name for to_omit in omit_names):
                weight_names.append(w.name)
    return weight_names

def _merge_layers_and_weights(l2_stats):
    stats_merged = []
    for stats in l2_stats.values():
        x = np.array(list(stats.values()))  # (layers, weights, stats, batches)
        x = x.reshape(-1, *x.shape[2:])     # (layers-weights, stats, batches)
        stats_merged.append(x)
    return stats_merged  # (epochs, layer-weights, stats, batches)
########### Plot setup ########################################################
ylim = 5
xlims = (.4, 1.2)
omit_names = 'bias'
suptitle = "wd={:.0e}".format(wd).replace('0', '')
side_annot = "EP"
configs = {'side_annot': dict(xy=(.9, .9))}
layer_names = list(l2_stats[0].keys())
weight_names = _get_weight_names(model, layer_names, omit_names)
stats_merged = _merge_layers_and_weights(l2_stats)
## Plot ########
features_hist_v2(stats_merged, colnames=weight_names, title=suptitle,
                 xlims=xlims, ylim=ylim, side_annot=side_annot,
                 pad_xticks=True, configs=configs)
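As a possible follow-up to the histogram plot, the same tracked data can be reduced to a couple of numbers per epoch, which makes runs with different wd easier to compare directly. A minimal sketch, assuming (as in the plot) that stat index 0 holds the per-batch means of the channel l2-norms and index 1 the maxima:

# stats_merged is a list of arrays shaped (layer_weights, n_stats, n_batches), built above
for ep, stats in enumerate(stats_merged):
    mean_norms = stats[:, 0, :]   # assumed: stat 0 = mean of channel l2-norms
    max_norms  = stats[:, 1, :]   # assumed: stat 1 = max of channel l2-norms
    print("EP{} | mean norm: {:.3f} | var: {:.4f} | max norm: {:.3f}".format(
          ep + 1, mean_norms.mean(), mean_norms.var(), max_norms.max()))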