Before moving on to more complicated things, I know I will have to implement my own backward pass, so I wanted to try something nice and simple first. I therefore attempted linear regression with a mean squared error loss in PyTorch. Things go wrong when I define my own backward method (see the third implementation variant below), and I suspect it is because I have not thought through clearly enough what I need to hand back to PyTorch as the gradient. So what I am probably after is some explanation/clarification/advice on what PyTorch expects me to provide here, and in what form.
I am using PyTorch 1.7.0, so a bunch of older examples no longer work (they define user-defined autograd functions in a way that differs from what the documentation now describes).
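For reference, here is the shape of the interface that current versions expect, as far as I understand it: a minimal identity Function, just to show the new static-method style with a ctx argument (older examples defined forward/backward as instance methods instead):

import torch
from torch.autograd import Function

class Identity(Function):
    @staticmethod
    def forward(ctx, x):
        # ctx can stash tensors for backward via ctx.save_for_backward;
        # nothing is needed for an identity.
        return x

    @staticmethod
    def backward(ctx, grad_output):
        # Return one gradient per forward() input, with the same shape
        # as that input.
        return grad_output

out = Identity.apply(torch.ones(3, requires_grad=True))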
Let's start with the standard approach, without any custom loss function:
import torch
import torch.nn as nn
import torch.nn.functional as F
# Let's generate some fake data
torch.manual_seed(42)
resid = torch.rand(100)    
inputs = torch.tensor([ [ xx ] for xx in range(100)] , dtype=torch.float32)
labels = torch.tensor([ (2 + 0.5*yy + resid[yy]) for yy in range(100)], dtype=torch.float32)
# Now we define a linear regression model
class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linearRegression, self).__init__()
        self.bn = torch.nn.BatchNorm1d(num_features=1)
        self.linear = torch.nn.Linear(inputSize, outputSize)
    def forward(self, inx):
        x = self.bn(inx) # Adding BN to standardize input helps us use a higher learning rate
        x = self.linear(x)
        return x
    
model = linearRegression(1, 1)     
# Using the standard mse_loss of PyTorch
epochs = 25    
mseloss = F.mse_loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
for epoch in range(epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()        
    print(f'epoch {epoch}, loss {loss}')    
    
This trains fine; I get a final loss of about 0.0824, and the plot of the fit looks good.
So now I replace the loss function with my own implementation of the MSE loss, but I still rely on PyTorch autograd. The only things I change here are defining the custom loss function, defining the loss based on it accordingly, and a minor detail of how I hand the predictions and ground-truth labels over to the loss function.
#######################################################
class MyMSELoss(nn.Module):
    
    def __init__(self):
        super(MyMSELoss, self).__init__()
    def forward(self, inputs, targets):        
        tmp = (inputs-targets)**2
        loss =  torch.mean(tmp)        
        return loss
#######################################################
model = linearRegression(1, 1) 
    
mseloss = MyMSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
for epoch in range(epochs):
    model.train()    
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()    
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
    
    print(f'epoch {epoch}, loss {loss}')
This gives exactly the same results as using the standard MSE loss function. The per-epoch loss looks like this:
epoch 0, loss 884.2006225585938
epoch 1, loss 821.930908203125
epoch 2, loss 718.7732543945312
epoch 3, loss 538.1835327148438
epoch 4, loss 274.50909423828125
epoch 5, loss 55.115299224853516
epoch 6, loss 2.405021905899048
epoch 7, loss 0.47621214389801025
epoch 8, loss 0.1584305614233017
epoch 9, loss 0.09725229442119598
epoch 10, loss 0.0853077694773674
epoch 11, loss 0.08297089487314224
epoch 12, loss 0.08251354098320007
epoch 13, loss 0.08242412656545639
epoch 14, loss 0.08240655809640884
epoch 15, loss 0.08240310847759247
epoch 16, loss 0.08240246027708054
epoch 17, loss 0.08240233361721039
epoch 18, loss 0.08240240067243576
epoch 19, loss 0.08240223675966263
epoch 20, loss 0.08240225911140442
epoch 21, loss 0.08240220695734024
epoch 22, loss 0.08240220695734024
epoch 23, loss 0.08240220695734024
epoch 24, loss 0.08240220695734024
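One quick way to convince yourself that the custom module really matches the built-in loss, independent of the training loop, is to compare the two on random tensors (assuming the MyMSELoss module defined above):

a = torch.randn(100)
b = torch.randn(100)
# Both reduce with a mean, so the results should match to float precision.
assert torch.allclose(MyMSELoss()(a, b), F.mse_loss(a, b))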
Now for the final version, in which I implement my own gradient for the MSE. For this I define my own backward method in the loss function class, and apparently I need to do mseloss = MyMSELoss.apply.
from torch.autograd import Function
#######################################################
class MyMSELoss(Function):
    
    @staticmethod
    def forward(ctx, y_pred, y):    
        ctx.save_for_backward(y_pred, y)
        return ( (y - y_pred)**2 ).mean()
    
    @staticmethod
    def backward(ctx, grad_output):
        y_pred, y = ctx.saved_tensors
        grad_input = torch.mean( -2.0 * (y - y_pred)).repeat(y_pred.shape[0])        
        # This fails, as does grad_input = -2.0 * (y-y_pred)
        # I've also messed around with the sign and that's not the sole problem, either.
        return grad_input, None
    
#######################################################
    
model = linearRegression(1, 1) 
mseloss = MyMSELoss.apply
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
for epoch in range(epochs):
    model.train()
    outputs = model(inputs)
    loss = mseloss(outputs.view(-1), labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
    print(f'epoch {epoch}, loss {loss}')    
This is where things go wrong: instead of the training loss decreasing, I get a training loss that increases. It now looks like this:
epoch 0, loss 884.2006225585938
epoch 1, loss 3471.384033203125
epoch 2, loss 47768555520.0
epoch 3, loss 1.7422577779621402e+33
epoch 4, loss inf
epoch 5, loss nan
epoch 6, loss nan
epoch 7, loss nan
epoch 8, loss nan
epoch 9, loss nan
epoch 10, loss nan
epoch 11, loss nan
epoch 12, loss nan
epoch 13, loss nan
epoch 14, loss nan
epoch 15, loss nan
epoch 16, loss nan
epoch 17, loss nan
epoch 18, loss nan
epoch 19, loss nan
epoch 20, loss nan
epoch 21, loss nan
epoch 22, loss nan
epoch 23, loss nan
epoch 24, loss nan
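One way to see the mismatch is to compare what this backward returns against what autograd computes for F.mse_loss on a small example:

y_pred = torch.randn(5, requires_grad=True)
y = torch.randn(5)
F.mse_loss(y_pred, y).backward()
print(y_pred.grad)  # element-wise gradient: 2 * (y_pred - y) / 5
# The backward above returns the *mean* gradient repeated for every element:
print((-2.0 * (y - y_pred)).detach().mean().repeat(5))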
The gradient of the MSE loss with respect to each prediction is element-wise: for n samples, the derivative of mean((y - y_pred)**2) with respect to y_pred[i] is 2 * (y_pred[i] - y[i]) / n (the 2 is a constant and can be ignored, since it only rescales the learning rate). Your backward instead returns the mean of these values repeated across all elements, and your commented-out alternative -2.0 * (y - y_pred) has the right shape but is missing the division by n, which scales the gradient by a factor of 100 and makes SGD diverge. backward must return one gradient per forward input, with the same shape as that input. So change your backward function to:
@staticmethod
def backward(ctx, grad_output):
    y_pred, y = ctx.saved_tensors
    # Element-wise gradient of mean((y - y_pred)**2) w.r.t. y_pred,
    # chained with the incoming gradient. grad_output is 1.0 when the
    # loss is the root of backward(), but multiplying by it keeps the
    # Function correct if it is ever composed with further ops.
    grad_input = grad_output * 2 * (y_pred - y) / y_pred.shape[0]
    return grad_input, None
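To verify a custom backward independently of any training loop, torch.autograd.gradcheck compares the analytical gradient against a numerical finite-difference estimate (a quick sketch, assuming the corrected MyMSELoss above):

import torch
from torch.autograd import gradcheck

# gradcheck wants double precision and requires_grad=True on the
# differentiable input; it returns True when backward() matches the
# finite-difference gradient (and raises otherwise).
y_pred = torch.randn(50, dtype=torch.double, requires_grad=True)
y = torch.randn(50, dtype=torch.double)
print(gradcheck(MyMSELoss.apply, (y_pred, y)))  # True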