Tags: optimization, pytorch, tensor
I am trying to use torch.optim.Adam. The code below is from the redner tutorial series and works fine in its initial setup. It optimises a scene by shifting all vertices by the same amount, called translation. Here is the original code:
vertices = []
for obj in base:
    vertices.append(obj.vertices.clone())

def model(translation):
    for obj, v in zip(base, vertices):
        obj.vertices = v + translation
    # Assemble the 3D scene.
    scene = pyredner.Scene(camera = camera, objects = objects)
    # Render the scene.
    img = pyredner.render_albedo(scene)
    return img

# Initial guess
# Set requires_grad=True since we want to optimize them later
translation = torch.tensor([10.0, -10.0, 10.0], device = pyredner.get_device(), requires_grad=True)
init = model(translation)
# Visualize the initial guess
t_optimizer = torch.optim.Adam([translation], lr=0.5)
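For context, once the optimiser is created, the tutorial fits translation with a standard Adam loop. Here is a self-contained toy sketch of that loop, with the renderer replaced by a simple quadratic loss, since pyredner, base, and camera are not available in this snippet; the target translation is an assumed value for illustration only:

```python
import torch

# Stand-in for the renderer: the "image" is just vertices + translation.
vertices = torch.randn(5, 3)
target_translation = torch.tensor([1.0, -2.0, 3.0])  # assumed ground truth
target = vertices + target_translation

translation = torch.tensor([10.0, -10.0, 10.0], requires_grad=True)
optimizer = torch.optim.Adam([translation], lr=0.5)

for _ in range(500):
    optimizer.zero_grad()
    # With the real renderer this would be ((model(translation) - target_img) ** 2).mean()
    loss = ((vertices + translation - target) ** 2).mean()
    loss.backward()
    optimizer.step()
```

The same pattern applies with the real render_albedo output and a target image in place of the quadratic loss.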
I tried to modify the code so that it computes a separate translation for every vertex. For this I made the following change to the code above, so that the shape of translation goes from torch.Size([3]) to torch.Size([43380, 3]):
# translation = torch.tensor([10.0, -10.0, 10.0], device = pyredner.get_device(), requires_grad=True)
translation = base[0].vertices.clone().detach().requires_grad_(True)
translation[:] = 10.0
This raises ValueError: can't optimize a non-leaf Tensor. Could you help me resolve this?
PS: Sorry for the wall of text; I am new to this topic and wanted to describe the problem as completely as possible.
Only leaf tensors can be optimised. A leaf tensor is a tensor that was created at the beginning of the graph, i.e. no tracked operation in the graph produced it. In other words, whenever you apply an operation to a tensor with requires_grad=True, PyTorch keeps track of that operation in order to back-propagate through it later. You cannot hand one of these intermediate results to an optimiser.
An example shows this more clearly:
weight = torch.randn((2, 2), requires_grad=True)
# => tensor([[ 1.5559, 0.4560],
# [-1.4852, -0.8837]], requires_grad=True)
weight.is_leaf # => True
result = weight * 2
# => tensor([[ 3.1118, 0.9121],
# [-2.9705, -1.7675]], grad_fn=<MulBackward0>)
# grad_fn defines how to do the back propagation (kept track of the multiplication)
result.is_leaf # => False
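Feeding that non-leaf result to an optimiser reproduces exactly the error from the question; a quick sketch:

```python
import torch

weight = torch.randn(2, 2, requires_grad=True)
result = weight * 2  # non-leaf: produced by a tracked operation

try:
    torch.optim.Adam([result], lr=0.5)
except ValueError as e:
    print(e)  # can't optimize a non-leaf Tensor
```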
result in this example cannot be optimised, since it is not a leaf tensor. Likewise, in your case translation is not a leaf tensor, because of the operation you perform after creating it:
translation[:] = 10.0
translation.is_leaf # => False
It carries grad_fn=<CopySlices>, so it is not a leaf and you cannot pass it to the optimiser. To avoid this, you have to create a new tensor that is detached from the graph:
# Not setting requires_grad, so that the next operation is not tracked
translation = base[0].vertices.clone().detach()
translation[:] = 10.0
# Now setting requires_grad so it is tracked in the graph and can be optimised
translation = translation.requires_grad_(True)
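A quick check confirms that this two-step construction yields a leaf tensor (a random tensor stands in for base[0].vertices, which is not available here):

```python
import torch

vertices = torch.randn(4, 3)  # stand-in for base[0].vertices

translation = vertices.clone().detach()
translation[:] = 10.0           # not tracked: requires_grad is still False here
translation = translation.requires_grad_(True)

print(translation.is_leaf)      # True
```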
What you are really doing here is creating a new tensor filled with the value 10.0 that has the same size as the vertices tensor. This can be achieved much more easily with torch.full_like:
translation = torch.full_like(base[0].vertices, 10.0, requires_grad=True)
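A sketch of this one-liner, again with a random stand-in for base[0].vertices, shows it directly produces a leaf tensor of the right shape:

```python
import torch

vertices = torch.randn(4, 3)  # stand-in for base[0].vertices
translation = torch.full_like(vertices, 10.0, requires_grad=True)

print(translation.is_leaf, translation.shape)  # True torch.Size([4, 3])
```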