Why doesn't autograd produce gradients for intermediate variables?

foo*_*bar 10 pytorch autograd

Trying to wrap my head around how gradients are represented and how autograd works:

import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([2]), requires_grad=True)
y = x * x
z = y * y

z.backward()

print(x.grad)
#Variable containing:
#32
#[torch.FloatTensor of size 1]

print(y.grad)
#None

Why doesn't it produce a gradient for y? If y.grad = dz/dy, then shouldn't it at least produce a Variable with y.grad = 2*y?

T. *_*arf 14

By default, gradients are only retained for leaf variables. Gradients of non-leaf variables are not retained for later inspection. This was done by design, to save memory.

-soumith chintala

See: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94
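
For illustration, here is a minimal sketch of the same computation that checks which tensors are leaves (it assumes a newer PyTorch, where Variable has been merged into Tensor; x, y, z mirror the question's code):

import torch

x = torch.tensor([2.0], requires_grad=True)  # leaf: created directly by the user
y = x * x                                    # non-leaf: produced by an operation
z = y * y                                    # non-leaf

z.backward()

print(x.is_leaf, y.is_leaf, z.is_leaf)  # True False False
print(x.grad)   # tensor([32.])  -> dz/dx = dz/dy * dy/dx = 2y * 2x = 4*x**3 = 32
print(y.grad)   # None -- dz/dy (= 2y = 8) was computed but not kept
                # (recent versions also emit a warning when accessing it)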

Option 1:

Call y.retain_grad()

x = Variable(torch.Tensor([2]), requires_grad=True)
y = x * x
z = y * y

y.retain_grad()

z.backward()

print(y.grad)
#Variable containing:
# 8
#[torch.FloatTensor of size 1]

Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/16
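
One pitfall worth noting (my own illustration, not from the linked thread): retain_grad() has to be called before backward(); calling it afterwards only affects later backward passes, so the gradient from the pass that already ran is still discarded:

import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x
z = y * y

z.backward()     # backward runs first ...
y.retain_grad()  # ... so this only takes effect for *future* backward passes

print(y.grad)    # None -- the gradient from the pass above was not kept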

Option 2:

Register a hook, which is basically a function called whenever that gradient is computed. You can then save it, assign it, print it, whatever...

from __future__ import print_function
import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([2]), requires_grad=True)
y = x * x
z = y * y

y.register_hook(print) ## this can be anything you need it to be

z.backward()

Output:

Variable containing:
 8
[torch.FloatTensor of size 1]

Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/2
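
Since the answer mentions that the hook can also save the gradient, here is a sketch that stores it instead of printing it (the grads dict and save_grad function are illustrative names, not part of the PyTorch API; it again uses the newer tensor API):

import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x
z = y * y

grads = {}  # holds intermediate gradients, keyed by name

def save_grad(grad):
    # called with dz/dy when backward() reaches y;
    # returning None leaves the gradient itself unchanged
    grads['y'] = grad

y.register_hook(save_grad)

z.backward()

print(grads['y'])  # tensor([8.])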

See also: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/7

  • Thanks, I didn't know about the retain_grad() method. (2 upvotes)