In the CIFAR-10 example that ships with TensorFlow, the convolutional layers do not appear to use weight decay. In fact, no layers use weight decay except the two fully connected layers. Is this common practice? I thought weight decay was applied to all weights (except biases).

For reference, here is the relevant code (wd is the weight decay factor):
# conv1
with tf.variable_scope('conv1') as scope:
  kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                       stddev=1e-4, wd=0.0)
  conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
  biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
  bias = tf.nn.bias_add(conv, biases)
  conv1 = tf.nn.relu(bias, name=scope.name)
  _activation_summary(conv1)

# pool1
pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                       padding='SAME', name='pool1')

# norm1
norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                  name='norm1')

# conv2
with tf.variable_scope('conv2') as scope:
  kernel = _variable_with_weight_decay('weights', shape=[5, 5, 64, 64],
                                       stddev=1e-4, wd=0.0)
  conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
  biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
  bias = tf.nn.bias_add(conv, biases)
  conv2 = tf.nn.relu(bias, name=scope.name)
  _activation_summary(conv2)

# norm2
norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                  name='norm2')

# pool2
pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1],
                       strides=[1, 2, 2, 1], padding='SAME', name='pool2')

# local3
with tf.variable_scope('local3') as scope:
  # Move everything into depth so we can perform a single matrix multiply.
  dim = 1
  for d in pool2.get_shape()[1:].as_list():
    dim *= d
  reshape = tf.reshape(pool2, [FLAGS.batch_size, dim])
  weights = _variable_with_weight_decay('weights', shape=[dim, 384],
                                        stddev=0.04, wd=0.004)
  biases = _variable_on_cpu('biases', [384], tf.constant_initializer(0.1))
  local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)
  _activation_summary(local3)

# local4
with tf.variable_scope('local4') as scope:
  weights = _variable_with_weight_decay('weights', shape=[384, 192],
                                        stddev=0.04, wd=0.004)
  biases = _variable_on_cpu('biases', [192], tf.constant_initializer(0.1))
  local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name)
  _activation_summary(local4)

# softmax, i.e. softmax(WX + b)
with tf.variable_scope('softmax_linear') as scope:
  weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
                                        stddev=1/192.0, wd=0.0)
  biases = _variable_on_cpu('biases', [NUM_CLASSES],
                            tf.constant_initializer(0.0))
  softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
  _activation_summary(softmax_linear)

return softmax_linear
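For context, wd only matters because the helper adds an L2 penalty to a 'losses' collection when it is non-zero, so wd=0.0 means no decay term is created for that variable at all. A minimal sketch of such a helper, paraphrasing the tutorial code (exact op names vary across TensorFlow versions), looks like this:

def _variable_with_weight_decay(name, shape, stddev, wd):
  # Create the variable (the tutorial pins variables on the CPU via _variable_on_cpu).
  var = _variable_on_cpu(name, shape,
                         tf.truncated_normal_initializer(stddev=stddev))
  # Only add an L2 weight-decay term when wd is non-zero.
  if wd:
    weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
    tf.add_to_collection('losses', weight_decay)
  return var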
Weight decay does not necessarily improve performance. In my own experience I have often found that my models perform worse (as measured by some metric on a held-out set) with appreciable weight decay. It is a useful form of regularization to be aware of, but you don't need to add it to every model regardless of whether it is needed, or without comparing performance with and without it.

As for whether weight decay on only part of the model works better than weight decay on the whole model, regularizing only a subset of the weights in this way does seem less common. However, I don't know of any theoretical reason for that. In general, neural networks already have plenty of hyperparameters to configure. Whether to use weight decay at all is already one question, and if you do, how strongly to regularize is another. If you also ask "which layers should I regularize this way?", you quickly run out of time to test the performance of every combination of turning it on and off per layer.

I imagine some models do benefit from weight decay on only part of the model; I just don't think it is common practice, because it is hard to test all the possibilities and find out which works best.
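If you do want to compare, turning decay on for the convolutional kernels only requires passing a non-zero wd in the layer definitions above; every penalty added to the 'losses' collection is then summed into the total training loss, which is roughly how the tutorial combines them. A rough sketch (the 0.004 here is purely illustrative, not a recommended value):

# conv1 weights, now with an (illustrative) non-zero weight-decay factor
kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                     stddev=1e-4, wd=0.004)

# Elsewhere, the total loss gathers the cross-entropy together with every
# weight-decay term that was added to the 'losses' collection:
cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
tf.add_to_collection('losses', cross_entropy_mean)
total_loss = tf.add_n(tf.get_collection('losses'), name='total_loss')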