How to get rid of checkerboard artifacts

Ste*_*fan 6 python machine-learning deep-learning tensorflow pytorch

I am using a fully convolutional autoencoder to colorize black-and-white images; however, the output has a checkerboard pattern and I want to get rid of it. The checkerboard artifacts I have seen so far were always much smaller than mine, and the usual way to remove them (so I have been told) is to replace all unpooling operations with bilinear upsampling.

But I cannot simply replace the unpooling operations: I work with images of different sizes, so the unpooling is needed, otherwise the output tensor might not have the same size as the original.
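
For reference, the replacement people suggested would look roughly like this. A minimal sketch, where upsample_to is a hypothetical helper and size1 is one of the pre-pooling sizes saved in my encoder below:

import torch
import torch.nn.functional as F

def upsample_to(x: torch.Tensor, size: torch.Size) -> torch.Tensor:
    # size is the full pre-pooling shape (N, C, H, W); interpolate only
    # needs the spatial part (H, W), so variable image sizes would still
    # be matched exactly
    return F.interpolate(x, size=size[2:], mode='bilinear', align_corners=False)

# usage inside forward(), replacing e.g.
#   x = self.unpool(x, indices1, output_size=size1)
# with
#   x = upsample_to(x, size1)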

In summary:

How can I get rid of these checkerboard artifacts without replacing the unpooling operations?

import torch.nn as nn


class AE(nn.Module):
    def __init__(self):
        super(AE, self).__init__()
        self.leaky_reLU = nn.LeakyReLU(0.2)
        # return_indices=True lets MaxUnpool2d put each value back at the
        # position of the original maximum, so the decoder can recover
        # the exact pre-pooling size
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=1, return_indices=True)
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2, padding=1)
        self.softmax = nn.Softmax2d()

        self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=3, stride=1, padding=1)
        self.conv6 = nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.conv7 = nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.conv8 = nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.conv9 = nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.conv10 = nn.ConvTranspose2d(in_channels=64, out_channels=2, kernel_size=3, stride=1, padding=1)

    def forward(self, x):

        # encoder
        x = self.conv1(x)
        x = self.leaky_reLU(x)
        size1 = x.size()  # remember the pre-pooling size for the decoder
        x, indices1 = self.pool(x)  # keep the max indices for unpooling

        x = self.conv2(x)
        x = self.leaky_reLU(x)
        size2 = x.size()
        x, indices2 = self.pool(x)

        x = self.conv3(x)
        x = self.leaky_reLU(x)
        size3 = x.size()
        x, indices3 = self.pool(x)

        x = self.conv4(x)
        x = self.leaky_reLU(x)
        size4 = x.size()
        x, indices4 = self.pool(x)

        # bottleneck
        x = self.conv5(x)
        x = self.leaky_reLU(x)

        x = self.conv6(x)
        x = self.leaky_reLU(x)

        # decoder
        x = self.unpool(x, indices4, output_size=size4)  # output_size restores the exact pre-pooling size
        x = self.conv7(x)
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices3, output_size=size3)
        x = self.conv8(x)
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices2, output_size=size2)
        x = self.conv9(x)
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices1, output_size=size1)
        x = self.conv10(x)
        x = self.softmax(x)

        return x

[Image: the network's output, showing the checkerboard pattern]

Kau*_*Roy 3

Skip connections are commonly used in encoder-decoder architectures, and they help produce accurate results by passing appearance information from the shallow layers of the encoder (discriminator) to the corresponding deep layers of the decoder (generator). UNet is a widely used encoder-decoder architecture. LinkNet is also quite popular; it differs from UNet in the way it fuses the appearance information of the encoder layers with that of the decoder layers. In the case of UNet, the incoming features (from the encoder) are concatenated in the corresponding decoder layer. LinkNet, on the other hand, performs addition, which is also why LinkNet requires fewer operations in a single forward pass and is noticeably faster than UNet.
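
To make the difference concrete, here is a minimal sketch of the two fusion styles; the module names, channel counts, and shapes are assumptions for illustration, not taken from either paper:

import torch
import torch.nn as nn

class UnetFusion(nn.Module):
    """UNet-style fusion: concatenate encoder and decoder features
    along the channel dimension, then convolve."""
    def __init__(self, channels: int):
        super().__init__()
        # after torch.cat the block sees twice the channels
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, dec: torch.Tensor, enc: torch.Tensor) -> torch.Tensor:
        return self.conv(torch.cat([dec, enc], dim=1))

class LinknetFusion(nn.Module):
    """LinkNet-style fusion: element-wise addition keeps the channel
    count unchanged, hence fewer operations per forward pass."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, dec: torch.Tensor, enc: torch.Tensor) -> torch.Tensor:
        return self.conv(dec + enc)

# both expect encoder/decoder features of matching spatial size:
dec = torch.randn(1, 64, 32, 32)
enc = torch.randn(1, 64, 32, 32)
print(UnetFusion(64)(dec, enc).shape)     # torch.Size([1, 64, 32, 32])
print(LinknetFusion(64)(dec, enc).shape)  # torch.Size([1, 64, 32, 32])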

Each convolutional block in the decoder might look like this:

[Image: a convolutional block of the decoder]
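
Since the original image does not survive here, this is a rough, assumed reconstruction of what such a decoder block commonly looks like (resize first, then convolve, which is the standard remedy for checkerboard artifacts from transposed convolutions; the exact layers in the answer's image may differ):

import torch.nn as nn

def decoder_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # upsample, convolve, normalize, activate
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )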

In addition, I am attaching a diagram depicting the UNet and LinkNet architectures. I hope using skip connections helps.

[Image: diagram of the UNet and LinkNet architectures]
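
Applied to the model in the question, a LinkNet-style skip could be wired in without touching the unpooling. A sketch, with variable names mirroring the question's forward() (fuse_skip and skip4 are hypothetical):

import torch

def fuse_skip(decoded: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
    # LinkNet-style fusion: element-wise addition; the shapes already match
    # because unpool(..., output_size=size4) restores the encoder size
    return decoded + skip

# inside the question's forward() this could read:
#   skip4 = x                                        # saved after conv4 + activation (512 channels)
#   size4 = x.size()
#   x, indices4 = self.pool(x)
#   ...
#   x = self.unpool(x, indices4, output_size=size4)  # 512 channels again
#   x = fuse_skip(x, skip4)
#   x = self.conv7(x)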