I have a PyTorch network that is already trained and whose weights have been updated (training is complete).
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(1, H)
        self.fc2 = nn.Linear(1, H)
        self.fc3 = nn.Linear(H, 1)

    def forward(self, x, y):
        h1 = F.relu(self.fc1(x) + self.fc2(y))
        h2 = self.fc3(h1)
        return h2
After training, I want to maximize the network's output with respect to its input. In other words, I want to optimize the input so that it maximizes the neural network's output, without changing the weights. How can I do this? My attempt, which doesn't work:
inp = torch.autograd.Variable(x)
out = net(inp)
grad = torch.autograd.grad(out, inp)
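A minimal sketch of one working approach (note that `Variable` is deprecated; a tensor with `requires_grad=True` suffices): freeze the weights, treat the inputs as the parameters of an optimizer, and do gradient ascent by minimizing the negated output. The network here is a fresh, untrained instance only for illustration; in practice `net` would be the trained model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H = 10  # hidden size, assumed for this sketch

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(1, H)
        self.fc2 = nn.Linear(1, H)
        self.fc3 = nn.Linear(H, 1)

    def forward(self, x, y):
        h1 = F.relu(self.fc1(x) + self.fc2(y))
        return self.fc3(h1)

net = Net()  # stand-in for the trained network
for p in net.parameters():
    p.requires_grad_(False)          # freeze the weights

# the inputs are now the only things being optimized
x = torch.zeros(1, 1, requires_grad=True)
y = torch.zeros(1, 1, requires_grad=True)
opt = torch.optim.Adam([x, y], lr=0.1)

with torch.no_grad():
    before = net(x, y).item()        # output at the starting inputs

for _ in range(100):
    opt.zero_grad()
    loss = -net(x, y)                # maximize output = minimize its negative
    loss.backward()                  # gradients flow to x and y only
    opt.step()

with torch.no_grad():
    after = net(x, y).item()         # output at the optimized inputs
```

Since the output of this network is piecewise linear and unbounded above, the loop increases it; a bounded objective (or an input constraint such as clamping) is needed if you want the optimization to converge to a finite point.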
In TensorFlow we can add L1 or L2 regularization in a sequential model. I can't find an equivalent in PyTorch. How can we add regularization to the weights in the network definition in PyTorch:
class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer
        """ How to add a L1 regularization after a certain hidden layer?? """
        self.predict = torch.nn.Linear(n_hidden, n_output)   # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))   # activation function for hidden layer
        x = self.predict(x)          # linear output
        return x
net …
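PyTorch has no per-layer regularizer argument in the layer constructors. Two common approaches, sketched below: for L2, pass `weight_decay` to the optimizer; for L1 (or L2 on a specific layer), add the penalty term to the loss yourself in the training loop. The dimensions, learning rate, and lambda values here are placeholders.

```python
import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = F.relu(self.hidden(x))
        return self.predict(x)

net = Net(n_feature=2, n_hidden=8, n_output=1)

# L2 on all parameters: weight_decay adds lambda * ||w||^2 to the objective
opt = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-4)

x = torch.randn(16, 2)
target = torch.randn(16, 1)
l1_lambda = 1e-3

opt.zero_grad()
mse = F.mse_loss(net(x), target)
# L1 on one specific layer: penalize that layer's weights in the loss
l1_penalty = l1_lambda * net.hidden.weight.abs().sum()
loss = mse + l1_penalty
loss.backward()
opt.step()
```

Adding the penalty to the loss is the more flexible option, since it lets you regularize only some layers or mix L1 and L2 with different strengths.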