There seem to be several ways to create a copy of a tensor in PyTorch, including:
y = tensor.new_tensor(x) #a
y = x.clone().detach() #b
y = torch.empty_like(x).copy_(x) #c
y = torch.tensor(x) #d
b is explicitly preferred over a and d: according to a UserWarning I get if I execute either a or d, I should use b instead. Why is it preferred? Performance? I'd argue it is less readable.
Are there any reasons for/against using c?
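Not from the original question, but a minimal sketch (assuming a reasonably recent PyTorch build) that makes the practical differences visible: all four variants allocate new memory, a and d emit the UserWarning, and only c stays attached to the computation graph, because copy_() is differentiable with respect to its source.

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

y_a = x.new_tensor(x)               # a: emits the UserWarning recommending clone().detach()
y_b = x.clone().detach()            # b: the spelling the warning recommends
y_c = torch.empty_like(x).copy_(x)  # c: stays attached to the graph (has a grad_fn)
y_d = torch.tensor(x)               # d: emits the same UserWarning as a

for name, y in (("a", y_a), ("b", y_b), ("c", y_c), ("d", y_d)):
    print(name, y.requires_grad, y.data_ptr() == x.data_ptr())
# a/b/d: requires_grad False; c: requires_grad True; no variant shares memory with x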
I can change the value of a tensor that requires grad without autograd knowing about it:
def error_unexpected_way_to_by_pass_safety():
    import torch
    a = torch.tensor([1, 2, 3.], requires_grad=True)
    # are detached tensors leaves? yes they are
    a_detached = a.detach()
    # a.fill_(2)  # illegal: PyTorch stops you because a leaf tensor that requires grad is used in an in-place op (the op would not be recorded in the computation graph, so the backward pass would compute the wrong derivative)
    a_detached.fill_(2)  # weird that this one is allowed; a_detached shares storage with a, so this changes a's data behind autograd's back
    print(a)  # tensor([2., 2., 2.], requires_grad=True) -- a was modified

error_unexpected_way_to_by_pass_safety()
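As a hedged follow-up (not part of the original question): combining detach() with clone() gives a copy that neither shares storage with a nor lives in the computation graph, so the same in-place fill no longer corrupts a.

import torch

a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
a_copy = a.detach().clone()   # new memory, detached from the graph
a_copy.fill_(2)
print(a)       # tensor([1., 2., 3.], requires_grad=True) -- unchanged
print(a_copy)  # tensor([2., 2., 2.])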