Tensorflow: how to ignore specific labels during semantic segmentation?

mcE*_*nge 7 tensorflow

I'm using tensorflow for semantic segmentation. How can I tell tensorflow to ignore a specific label when computing the pixel-wise loss?

I read in this post that for image classification you can set a label to -1 and it will be ignored. If that is true, then given the label tensor, how do I modify my labels so that certain values are changed to -1?

In Matlab it would be something like:

ignore_label = 255
myLabelTensor(myLabelTensor == ignore_label) = -1

But I don't know how to do this in TF.
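For reference, the Matlab-style masking above translates almost one-to-one. Here is a plain-numpy sketch of the same idea (in the TF version used here the equivalent would be `tf.select(tf.equal(label, ignore_label), -tf.ones_like(label), label)`, or `tf.where` in later releases); the toy label map is made up for illustration:

```python
import numpy as np

ignore_label = 255

# A toy 2x3 label map containing the ignore label (made-up values).
labels = np.array([[0, 255, 2],
                   [1, 255, 0]], dtype=np.int32)

# Equivalent of: myLabelTensor(myLabelTensor == ignore_label) = -1
masked = np.where(labels == ignore_label, -1, labels)
print(masked)  # ignore-label entries replaced by -1, rest untouched
```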

Some background information:
This is how the labels are loaded:

label_contents = tf.read_file(input_queue[1])
label = tf.image.decode_png(label_contents, channels=1)

And this is how the loss is currently computed:

raw_output = net.layers['fc1_voc12']
prediction = tf.reshape(raw_output, [-1, n_classes])
label_proc = prepare_label(label_batch, tf.pack(raw_output.get_shape()[1:3]),n_classes)
gt = tf.reshape(label_proc, [-1, n_classes])

# Pixel-wise softmax loss.
loss = tf.nn.softmax_cross_entropy_with_logits(prediction, gt)
reduced_loss = tf.reduce_mean(loss)

def prepare_label(input_batch, new_size, n_classes):
    """Resize masks and perform one-hot encoding.

    Args:
      input_batch: input tensor of shape [batch_size H W 1].
      new_size: a tensor with new height and width.

    Returns:
      Outputs a tensor of shape [batch_size h w 21]
      with last dimension comprised of 0's and 1's only.
    """
    with tf.name_scope('label_encode'):
        input_batch = tf.image.resize_nearest_neighbor(input_batch, new_size) # as labels are integer numbers, need to use NN interp.
        input_batch = tf.squeeze(input_batch, squeeze_dims=[3]) # reducing the channel dimension.
        input_batch = tf.one_hot(input_batch, depth=n_classes)
    return input_batch

I'm using the tensorflow-deeplab-resnet model, which uses caffe-tensorflow to convert the Resnet model implemented in Caffe to tensorflow.

jde*_*esa 0

According to the documentation, tf.nn.softmax_cross_entropy_with_logits must be called with valid probability distributions in labels, otherwise the computation will be incorrect; and calling tf.nn.sparse_softmax_cross_entropy_with_logits with negative labels (which would probably be more convenient in your case) will either raise an error or return NaN values. I wouldn't rely on either of them to ignore some labels.
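To see why a -1 entry is not simply ignored, here is a small numpy sketch. The `softmax_cross_entropy` helper is a hand-rolled stand-in for the TF op, and the logit values are made up: with a -1 in the labels, the sum -Σ y·log p picks up a negative term, so the result is nonsense rather than an ignored pixel:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable log-softmax, then -sum(labels * log_probs).
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -(labels * log_probs).sum()

logits = np.array([2.0, 1.0, 0.5])        # made-up logits for one pixel

valid = np.array([0.0, 1.0, 0.0])         # proper one-hot: positive loss
invalid = np.array([0.0, 0.0, -1.0])      # "-1 means ignore" does NOT hold

print(softmax_cross_entropy(logits, valid))    # > 0, as expected
print(softmax_cross_entropy(logits, invalid))  # < 0: nonsense, not ignored
```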

What I would do instead is to replace the logit of the ignored class with "infinity" in those pixels where the correct class is the ignored one, so that those pixels contribute nothing to the loss:

ignore_label = ...
# Make zeros everywhere except for the ignored label
input_batch_ignored = tf.concat(input_batch.get_shape().ndims - 1,
    [tf.zeros_like(input_batch[:, :, :, :ignore_label]),
     tf.expand_dims(input_batch[:, :, :, ignore_label], -1),
     tf.zeros_like(input_batch[:, :, :, ignore_label + 1:])])
# Make the corresponding logits "infinity" (a big enough number)
prediction_fix = tf.select(input_batch_ignored > 0,
    1e30 * tf.ones_like(prediction), prediction)
# Compute the loss with the fixed logits
loss = tf.nn.softmax_cross_entropy_with_logits(prediction_fix, gt)
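You can check the effect of this trick numerically. In this numpy sketch (hypothetical logits, with class index 2 as the ignored class), the loss of a pixel whose ground truth is the ignored class collapses to zero once that class's logit is forced to a huge value:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable log-softmax, then -sum(labels * log_probs).
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -(labels * log_probs).sum()

ignore_label = 2                       # hypothetical ignored class index
gt = np.array([0.0, 0.0, 1.0])         # a pixel whose ground truth is ignored

logits = np.array([3.0, 1.0, 0.2])     # made-up raw prediction for the pixel
# The pixel's gt is the ignored class -> force that logit to "infinity"
fixed = logits.copy()
fixed[ignore_label] = 1e30

print(softmax_cross_entropy(logits, gt))  # sizeable loss
print(softmax_cross_entropy(fixed, gt))   # ~0: no longer affects the loss
```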

The only problem with this is that it assumes the pixels of the ignored class are always predicted correctly, which means that the loss of images containing many such pixels will be artificially smaller. Depending on the case this may or may not matter, but if you want to be really accurate, you should weight the loss of each image by its number of non-ignored pixels, instead of just taking the mean:

# Count relevant pixels on each image
input_batch_relevant = 1 - input_batch_ignored
input_batch_weight = tf.reduce_sum(input_batch_relevant, [1, 2, 3])
# Compute relative weights
input_batch_weight = input_batch_weight / tf.reduce_sum(input_batch_weight)
# Compute reduced loss according to weights
reduced_loss = tf.reduce_sum(loss * input_batch_weight)
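The difference is easy to see with toy numbers. In this numpy sketch (made-up per-pixel losses for two 4-pixel images, where ignored pixels contribute exactly 0 after the logit trick), the plain mean over all pixels is dragged down by the half-ignored image, while normalizing by the number of relevant pixels is not:

```python
import numpy as np

# Made-up per-pixel losses for two toy 4-pixel "images".
loss_img0 = np.array([0.5, 0.7, 0.0, 0.0])  # 2 of 4 pixels are ignored
loss_img1 = np.array([0.4, 0.6, 0.8, 0.2])  # no ignored pixels

relevant = np.array([2.0, 4.0])             # non-ignored pixels per image

# Plain mean over all 8 pixels: artificially small for the batch.
naive_mean = np.concatenate([loss_img0, loss_img1]).mean()

# Mean over relevant pixels only (the per-image weighting, collapsed).
weighted_mean = (loss_img0.sum() + loss_img1.sum()) / relevant.sum()

print(naive_mean, weighted_mean)  # the naive mean is smaller
```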