Posts by Olr*_*mde

Why do we normalize images to mean = 0.5, std = 0.5?

I was looking for GAN code on GitHub. The code I found uses PyTorch. In this code, images are first normalized to mean = 0.5, std = 0.5. Usually, normalization means scaling to min = 0 and max = 1, or to a normal distribution with mean = 0 and std = 1. Why normalize to mean = 0.5 and std = 0.5 here?

transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
])
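For context, `ToTensor()` already scales raw pixel values from [0, 255] to [0, 1]; applying `Normalize` with mean 0.5 and std 0.5 then maps that range to [-1, 1], which matches the tanh output range of a typical GAN generator. A minimal sketch of the per-channel arithmetic (plain Python, no torch needed):

```python
# Normalize applies (x - mean) / std element-wise per channel.
# With mean = 0.5 and std = 0.5, the [0, 1] range becomes [-1, 1].
def normalize(x, mean=0.5, std=0.5):
    return (x - mean) / std

print(normalize(0.0))  # -1.0  (black pixel)
print(normalize(0.5))  #  0.0  (mid gray)
print(normalize(1.0))  #  1.0  (white pixel)
```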

python computer-vision deep-learning tensorflow pytorch

5 votes · 1 answer · 1094 views

PyTorch NotImplementedError in forward

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.layer = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2), # 16x16x650
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1), # 32x16x650
            nn.ReLU(),
            nn.Dropout2d(0.5),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), # 64x16x650
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2), # 64x8x325
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU()) # 64x8x325

        self.fc = nn.Sequential(
            nn.Linear(64*8*325, 128),
            nn.ReLU(),
            nn.Linear(128, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

        def forward(self, x):
            out = self.layer1(x) …
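The `NotImplementedError` most likely comes from `nn.Module`'s default `forward`: in the code above, `def forward` is indented inside `__init__`, so it becomes a local function rather than a method of `Model` (it also references `self.layer1`, while the attribute is named `self.layer`). A torch-free sketch of the scoping issue, using hypothetical classes `Broken` and `Fixed`:

```python
class Broken:
    def __init__(self):
        self.layer = "conv"
        # Indented inside __init__, this def is just a local function;
        # instances of Broken have no forward() method at all.
        def forward(self, x):
            return x

class Fixed:
    def __init__(self):
        self.layer = "conv"

    # Defined at class level: now forward is a real method.
    def forward(self, x):
        return x

print(hasattr(Broken(), "forward"))  # False
print(hasattr(Fixed(), "forward"))   # True
```

In the original snippet, de-indenting `forward` so it sits at the same level as `__init__` (and calling `self.layer` instead of `self.layer1`) should resolve the error.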

python deep-learning pytorch

3 votes · 2 answers · 3335 views