How to compute the second-order Jacobian in PyTorch?

alx*_*yok 6 python gradient pytorch

I have a neural network that computes a vector u. I would like to compute the first- and second-order Jacobians with respect to the input x, a single element.

Does anybody know how to do that in PyTorch? Below is a code snippet from my project:

import torch
import torch.nn as nn

class PINN(torch.nn.Module):
    
    def __init__(self, layers:list):
        super(PINN, self).__init__()
        self.linears = nn.ModuleList([])
        # hidden layers: Linear followed by ReLU
        for i, dim in enumerate(layers[:-2]):
            self.linears.append(nn.Linear(dim, layers[i+1]))
            self.linears.append(nn.ReLU())
        # output layer (no activation)
        self.linears.append(nn.Linear(layers[-2], layers[-1]))
        
    def forward(self, x):
        for layer in self.linears:
            x = layer(x)
        return x

Then I instantiate my network:

n_in = 1
units = 50
q = 500

pinn = PINN([n_in, units, units, units, q+1])
pinn

which returns

PINN(
  (linears): ModuleList(
    (0): Linear(in_features=1, out_features=50, bias=True)
    (1): ReLU()
    (2): Linear(in_features=50, out_features=50, bias=True)
    (3): ReLU()
    (4): Linear(in_features=50, out_features=50, bias=True)
    (5): ReLU()
    (6): Linear(in_features=50, out_features=501, bias=True)
  )
)

Then I compute the first- and second-order Jacobians:

x = torch.randn(1, requires_grad=False)

u_x = torch.autograd.functional.jacobian(pinn, x, create_graph=True)
print("First Order Jacobian du/dx of shape {}, and features\n{}".format(u_x.shape, u_x)

u_xx = torch.autograd.functional.jacobian(lambda _: u_x, x)
print("Second Order Jacobian du_x/dx of shape {}, and features\n{}".format(u_xx.shape, u_xx)

which returns

First Order Jacobian du/dx of shape torch.Size([501, 1]), and features
tensor([[-0.0310],
        [ 0.0139],
        [-0.0081],
        [-0.0248],
        [-0.0033],
        [ 0.0013],
        [ 0.0040],
        [ 0.0273],
        ...
        [-0.0197]], grad_fn=<ViewBackward>)
Second Order Jacobian du/dx of shape torch.Size([501, 1, 1]), and features
tensor([[[0.]],

        [[0.]],

        [[0.]],

        [[0.]],

        ...

        [[0.]]])

Shouldn't u_xx be a None vector if it doesn't depend on x?

Thanks in advance.

alx*_*yok 2

So, as @jodag mentioned in his comment, ReLU is either zero or linear, so its gradient is piecewise constant (except at 0, a rare event) and its second-order derivative is zero. I changed the activation function to Tanh, which finally allows me to compute the Jacobian twice.
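To illustrate the point, here is a minimal sketch (toy scalar functions, not the network from the question) that nests torch.autograd.functional.jacobian twice: the ReLU-based function yields zeros, while the Tanh-based one yields a nonzero second derivative.

import torch

# Nest jacobian() twice; the inner call needs create_graph=True so its
# result can itself be differentiated by the outer call.
def second_derivative(f, x):
    inner = lambda x_: torch.autograd.functional.jacobian(f, x_, create_graph=True)
    return torch.autograd.functional.jacobian(inner, x)

x = torch.tensor([0.7])
print(second_derivative(lambda t: torch.relu(3.0 * t), x))  # zeros: ReLU is piecewise linear
print(second_derivative(lambda t: torch.tanh(3.0 * t), x))  # nonzero second derivative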

The final code is:

import torch
import torch.nn as nn

class PINN(torch.nn.Module):
    
    def __init__(self, layers:list):
        super(PINN, self).__init__()
        self.linears = nn.ModuleList([])
        for i, dim in enumerate(layers[:-2]):
            self.linears.append(nn.Linear(dim, layers[i+1]))
            self.linears.append(nn.Tanh())
        self.linears.append(nn.Linear(layers[-2], layers[-1]))
        
    def forward(self, x):
        for layer in self.linears:
            x = layer(x)
        return x
        
    def compute_u_x(self, x):
        # first-order Jacobian du/dx; create_graph=True so it can be
        # differentiated again below
        self.u_x = torch.autograd.functional.jacobian(self, x, create_graph=True)
        self.u_x = torch.squeeze(self.u_x)
        return self.u_x
    
    def compute_u_xx(self, x):
        # second-order Jacobian d(du/dx)/dx, i.e. the Jacobian of compute_u_x
        self.u_xx = torch.autograd.functional.jacobian(self.compute_u_x, x)
        self.u_xx = torch.squeeze(self.u_xx)
        return self.u_xx

Then calling compute_u_xx(x) on a PINN instance, with x.requires_grad set to True, gets me there. How to get rid of the useless dimensions introduced by torch.autograd.functional.jacobian remains to be understood, though...
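For completeness, a short usage sketch (assuming the same layer sizes as in the question) of how the two methods are called:

pinn = PINN([1, 50, 50, 50, 501])

x = torch.randn(1, requires_grad=True)  # x.requires_grad set to True, as described above

u_x = pinn.compute_u_x(x)    # du/dx, shape (501,) after the squeeze
u_xx = pinn.compute_u_xx(x)  # d2u/dx2, shape (501,) after the squeeze; nonzero with Tanh
print(u_x.shape, u_xx.shape)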