Nak*_*hya · 6 · tags: python, gradient-descent, conv-neural-network, pytorch, autograd
I am trying to improve a CNN I built by implementing the weighted-loss method described in this paper. To do so, I studied this notebook, which implements the pseudocode of the method described in the paper.
When porting their code to my model, I ran into the following error when calling torch.autograd.grad():

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior
My code is below; the error occurs on the second-to-last line:
for epoch in range(1): #tqdm(range(params['epochs'])):
    model.train()
    text_t, labels_t = next(iter(train_iterator))
    text_t = to_var(text_t, requires_grad=False)
    labels_t = to_var(labels_t, requires_grad=False)

    dummy = L2RWCNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM,
                    DROPOUT, PAD_IDX)
    dummy.state_dict(model.state_dict())
    dummy.cuda()

    y_f_hat = dummy(text_t)
    cost = F.binary_cross_entropy_with_logits(y_f_hat.squeeze(), labels_t, reduce=False)
    eps = to_var(torch.zeros(cost.size()))
    l_f_meta = torch.sum(cost * eps)

    dummy.zero_grad()
    num_params = 0
    grads = torch.autograd.grad(l_f_meta, (dummy.params()), create_graph=True)
    with torch.no_grad():
        for p, grad in zip(dummy.parameters(), grads):
            tmp = p - params['lr'] * grad
            p.copy_(tmp)

    text_v, labels_v = next(iter(valid_iterator))
    y_g_hat = dummy(text_v)
    l_g_meta = F.binary_cross_entropy_with_logits(y_g_hat.squeeze(), labels_v, reduce=False)
    l_g_meta = torch.sum(l_g_meta)

    grad_eps = torch.autograd.grad(l_g_meta, eps, only_inputs=True)[0]
    print(grad_eps)
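For contrast, here is a stripped-down sketch of the pattern the notebook relies on, using a toy linear model with made-up shapes and names (not the L2RWCNN above): eps is created with requires_grad=True, and the perturbed weights are built as a differentiable expression rather than copied in-place under torch.no_grad(), so the validation loss stays connected to eps in the graph.

```python
import torch
import torch.nn.functional as F

# Toy setup (hypothetical shapes/names) illustrating a differentiable
# inner update that keeps eps in the autograd graph.
w = torch.randn(5, 1, requires_grad=True)    # stand-in for the dummy model's weights
x_t, y_t = torch.randn(8, 5), torch.rand(8)  # training batch
x_v, y_v = torch.randn(8, 5), torch.rand(8)  # validation batch
lr = 0.1

cost = F.binary_cross_entropy_with_logits(x_t @ w, y_t.unsqueeze(1), reduction='none')
eps = torch.zeros_like(cost, requires_grad=True)  # eps must require grad
l_f_meta = (cost * eps).sum()

# Differentiable "update": w_new is an expression involving eps,
# not an in-place copy performed under torch.no_grad().
(g_w,) = torch.autograd.grad(l_f_meta, w, create_graph=True)
w_new = w - lr * g_w

l_g_meta = F.binary_cross_entropy_with_logits(x_v @ w_new, y_v.unsqueeze(1)).sum()
grad_eps = torch.autograd.grad(l_g_meta, eps)[0]  # now well-defined
print(grad_eps.shape)  # torch.Size([8, 1])
```

The key difference from the loop above is that the weight update is kept as part of the graph, so differentiating the validation loss with respect to eps has a path to follow.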
I believe the error occurs because eps is not used anywhere before the torch.autograd.grad() call. I tried the fix suggested in the message, setting allow_unused=True, but that just produces a None value. I looked at this post for a solution, but the fix that worked there (don't slice the tensor) does not help me, since I am not passing any sliced variables. I also tried setting create_graph=False in the first autograd.grad() call, but that did not solve the problem either. Does anyone know a fix?
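The behavior described above can be reproduced with a minimal sketch (toy tensors, unrelated to the model in question): autograd.grad() raises this RuntimeError whenever it is asked for the gradient of an input that the output does not depend on, and allow_unused=True merely silences the error by returning None for that input.

```python
import torch

# Minimal reproduction of the error: 'eps' never participates in
# computing 'out', so it has no edge in the autograd graph.
x = torch.randn(3, requires_grad=True)
eps = torch.zeros(3, requires_grad=True)  # unused below, like eps in the question
out = (x * 2).sum()                       # graph contains x only, not eps

try:
    torch.autograd.grad(out, eps)
except RuntimeError as e:
    print(e)  # One of the differentiated Tensors appears to not have been used ...

# allow_unused=True suppresses the error but simply returns None for
# the unused input, matching the None value described above.
out = (x * 2).sum()
g = torch.autograd.grad(out, eps, allow_unused=True)
print(g)  # (None,)
```

So the None result is not a second bug: it is autograd reporting, in a different way, that there is no path from eps to the loss being differentiated.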
Edit: I created a new post with the question phrased differently from here.