grad_outputs in torch.autograd.grad (CrossEntropyLoss)

Aer*_*rin 5 pytorch autograd

I’m trying to get d(loss)/d(input). I know I have 2 options.

First option:

    loss.backward()
    dlossdx = x.grad.data

Second option:

    # criterion = nn.CrossEntropyLoss(reduce=False)
    # loss = criterion(y_hat, labels)     
    # No need to call backward. 
    dlossdx = torch.autograd.grad(outputs = loss,
                                  inputs = x,
                                  grad_outputs = ? )

My question is: if I use cross-entropy loss, what should I pass as grad_outputs in the second option?

Do I put d(CE)/d(y_hat)? Since PyTorch's cross-entropy includes the softmax, this would require me to pre-calculate the softmax derivative using the Kronecker delta.

Or do I put d(CE)/d(CE), which is torch.ones_like?

A conceptual answer is fine.

Uma*_*pta 2

Let's try to understand how both of the options work.

We will use this setup:

    import torch
    import torch.nn as nn
    import numpy as np

    x = torch.rand((64, 10), requires_grad=True)
    net = nn.Sequential(nn.Linear(10, 10))
    labels = torch.tensor(np.random.choice(10, size=64)).long()
    criterion = nn.CrossEntropyLoss()

First option

    loss = criterion(net(x), labels)
    loss.backward(retain_graph=True)  # retain the graph so it can be reused for the second option
    dloss_dx = x.grad

Note that you pass no gradient argument to backward() because the loss is a scalar. If you had computed the loss as a vector, you would have to pass one, as sketched below.
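For example, here is a minimal sketch building on the setup above (criterion_vec, loss_vec and dloss_dx_vec are hypothetical names): with reduction='none' the loss is a vector of per-sample values, so backward() requires an explicit gradient tensor of the same shape.

    # Clear the gradient accumulated by the earlier backward() call.
    x.grad = None
    # Unreduced loss: one value per sample, shape (64,).
    criterion_vec = nn.CrossEntropyLoss(reduction='none')
    loss_vec = criterion_vec(net(x), labels)
    # A vector loss needs an explicit gradient; ones_like corresponds to summing the losses.
    loss_vec.backward(torch.ones_like(loss_vec))
    dloss_dx_vec = x.grad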

Second option

    dloss_dx2 = torch.autograd.grad(loss, x)

This returns a tuple, and you can use the first element as the gradient of x.

Note that torch.autograd.grad returns the sum of dout/dx if you pass multiple outputs as a tuple. But since the loss here is a scalar, you don't need to pass grad_outputs: by default it is taken to be a tensor of ones.
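To connect this back to the original question: with the unreduced loss you pass d(CE)/d(CE), i.e. a tensor of ones, as grad_outputs; there is no need to work out the softmax derivative by hand, since autograd differentiates through the softmax/log-softmax inside the loss. A minimal sketch reusing the setup above (criterion_vec, loss_vec and dloss_dx3 are hypothetical names):

    # Unreduced cross-entropy gives a vector loss, so grad_outputs is required.
    criterion_vec = nn.CrossEntropyLoss(reduction='none')
    loss_vec = criterion_vec(net(x), labels)
    # grad_outputs = torch.ones_like(loss_vec) is d(CE)/d(CE).
    # torch.autograd.grad returns a tuple of gradients, hence the [0].
    dloss_dx3 = torch.autograd.grad(outputs=loss_vec,
                                    inputs=x,
                                    grad_outputs=torch.ones_like(loss_vec))[0]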