Do tf.nn.l2_loss and tf.contrib.layers.l2_regularizer serve the same purpose for adding L2 regularization in TensorFlow?


It seems that L2 regularization in TensorFlow can be implemented in two ways:

(i) using tf.nn.l2_loss, or (ii) using tf.contrib.layers.l2_regularizer

Do these two approaches accomplish the same thing? If not, how do they differ?


They do the same thing (at least currently). The only difference is that tf.contrib.layers.l2_regularizer multiplies the result of tf.nn.l2_loss by scale.

Take a look at the implementation of tf.contrib.layers.l2_regularizer (https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/contrib/layers/python/layers/regularizers.py):

def l2_regularizer(scale, scope=None):
  """Returns a function that can be used to apply L2 regularization to weights.
  Small values of L2 can help prevent overfitting the training data.
  Args:
    scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
    scope: An optional scope name.
  Returns:
    A function with signature `l2(weights)` that applies L2 regularization.
  Raises:
    ValueError: If scale is negative or if scale is not a float.
  """
  if isinstance(scale, numbers.Integral):
    raise ValueError('scale cannot be an integer: %s' % (scale,))
  if isinstance(scale, numbers.Real):
    if scale < 0.:
      raise ValueError('Setting a scale less than 0 on a regularizer: %g.' %
                       scale)
    if scale == 0.:
      logging.info('Scale of 0 disables regularizer.')
      return lambda _: None

  def l2(weights):
    """Applies l2 regularization to weights."""
    with ops.name_scope(scope, 'l2_regularizer', [weights]) as name:
      my_scale = ops.convert_to_tensor(scale,
                                       dtype=weights.dtype.base_dtype,
                                       name='scale')
      return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

  return l2

The line you are interested in is:

  return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

So in practice, tf.contrib.layers.l2_regularizer internally calls tf.nn.l2_loss and simply multiplies the result by the scale argument.
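
You can verify this yourself. Below is a minimal sketch (written for TensorFlow 1.x, where tf.contrib is still available); the weight values and the scale of 0.01 are arbitrary choices for illustration:

import tensorflow as tf

scale = 0.01
weights = tf.constant([[1.0, -2.0], [3.0, 4.0]])

# Manual approach: scale * sum(w**2) / 2
manual = scale * tf.nn.l2_loss(weights)

# Regularizer approach: l2_regularizer(scale) returns a function l2(weights)
via_regularizer = tf.contrib.layers.l2_regularizer(scale)(weights)

with tf.Session() as sess:
    print(sess.run([manual, via_regularizer]))  # both evaluate to 0.15

The practical difference is mostly in how they are used: tf.nn.l2_loss is usually added to the loss by hand, while the function returned by l2_regularizer is typically passed as the regularizer argument of tf.get_variable or of the tf.contrib.layers layer constructors, which collect the penalty into tf.GraphKeys.REGULARIZATION_LOSSES.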