Correct way of using custom weight maps in the U-Net architecture

Beg*_*ner 9 python image-segmentation pytorch semantic-segmentation

There is a well-known trick in the U-Net architecture: using custom weight maps to increase accuracy. Here are the details:

[Image: the pixel-wise weight map formula from the U-Net paper, w(x) = w_c(x) + w_0 · exp(−(d_1(x) + d_2(x))² / (2σ²)), where d_1 and d_2 are the distances to the borders of the two nearest cells.]

Now, from asking here and in several other places, I have learned of two approaches. I'd like to know which one is correct, or whether there is some other, more correct way.

  1. The first is to use torch.nn.functional inside the training loop:

    loss = torch.nn.functional.cross_entropy(output, target, w), where w is the computed custom weight.

  2. The second is to use reduction='none' when instantiating the loss function outside the training loop: criterion = torch.nn.CrossEntropyLoss(reduction='none')

    and then multiply by the custom weights inside the training loop (a runnable sketch of this pattern follows the list):

    gt   # Ground truth, dtype torch.long, shape [batch, H, W]
    pd   # Network output (logits), shape [batch, classes, H, W]
    W    # Per-pixel weighting based on the distance map from U-Net
    loss = criterion(pd, gt)                            # per-pixel loss, shape [batch, H, W]
    loss = W * loss                                     # ensure that weights are scaled appropriately
    loss = torch.sum(loss.flatten(start_dim=1), dim=1)  # sums the loss per image
    loss = torch.mean(loss)                             # averages across the batch
    
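For concreteness, here is a minimal runnable version of this second approach; the sizes and the random weight map below are placeholders, not values from my actual pipeline:

    import torch

    B, C, H, W = 3, 2, 4, 4                      # batch, classes, height, width
    pd = torch.randn(B, C, H, W)                 # network output (logits)
    gt = torch.randint(0, C, (B, H, W))          # ground truth, dtype torch.long
    Wmap = torch.rand(B, H, W) + 1               # per-pixel custom weight map

    criterion = torch.nn.CrossEntropyLoss(reduction='none')
    loss = criterion(pd, gt)                     # per-pixel loss, shape [B, H, W]
    loss = Wmap * loss                           # apply the custom weight map
    loss = loss.flatten(start_dim=1).sum(dim=1)  # sum the loss per image
    loss = loss.mean()                           # average across the batch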

Now I'm a bit confused about which one is correct, whether there is yet another way, or whether both are correct.

jch*_*kow 5

The weighting part looks like just plain weighted cross-entropy, where the weights are given per class (2 classes in the example below).

import torch
import torch.nn as nn

weights = torch.FloatTensor([.3, .7])
loss_func = nn.CrossEntropyLoss(weight=weights)
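As a quick usage sketch, continuing the snippet above with hypothetical segmentation tensors (the shapes are illustrative), each pixel's loss is scaled by the weight of its target class:

logits = torch.randn(4, 2, 8, 8)         # [batch, classes, H, W]
target = torch.randint(0, 2, (4, 8, 8))  # [batch, H, W], dtype torch.long
loss = loss_func(logits, target)         # scalar; each pixel weighted by its class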

Edit:

Have you seen this implementation by Patrick Black?

import torch
import torch.nn.functional as F

# Set properties
batch_size = 10
out_channels = 2
W = 10
H = 10

# Initialize logits etc. with random values
logits = torch.FloatTensor(batch_size, out_channels, H, W).normal_()
target = torch.LongTensor(batch_size, H, W).random_(0, out_channels)
weights = torch.FloatTensor(batch_size, 1, H, W).random_(1, 3)

# Calculate log probabilities over the class dimension
logp = F.log_softmax(logits, dim=1)

# Gather log probabilities with respect to target
logp = logp.gather(1, target.view(batch_size, 1, H, W))

# Multiply with weights
weighted_logp = (logp * weights).view(batch_size, -1)

# Rescale so that loss is in approx. same interval
weighted_loss = weighted_logp.sum(1) / weights.view(batch_size, -1).sum(1)

# Average over mini-batch
weighted_loss = -1. * weighted_loss.mean()
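For what it's worth, the same computation can also be written more compactly with reduction='none'; this is just a sketch of the equivalence, reusing the tensors defined above:

import torch.nn as nn

# reduction='none' performs the log_softmax + gather steps internally and
# returns one (already negated) loss value per pixel.
per_pixel = nn.CrossEntropyLoss(reduction='none')(logits, target)  # [batch, H, W]

# Weight each pixel, rescale by the total weight per image, average over the batch.
weighted = (per_pixel * weights.squeeze(1)).view(batch_size, -1)
weighted_loss = weighted.sum(1) / weights.view(batch_size, -1).sum(1)
weighted_loss = weighted_loss.mean()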