Two-layer neural network in PyTorch fails to converge

Mr.*_*bot 3 deep-learning pytorch

Question

I am trying to implement a 2-layer neural network using different approaches (TensorFlow, PyTorch, and from scratch) and then compare their performance on the MNIST dataset.

I'm not sure what mistake I have made, but the accuracy in PyTorch is only about 10%, which is basically random guessing. I suspect the weights may not be getting updated at all.

Note that I deliberately use the dataset provided by TensorFlow so that the data stays consistent across the 3 different approaches, for an accurate comparison.

from tensorflow.examples.tutorials.mnist import input_data
import torch

class Net(torch.nn.Module):
    def __init__(self):
      super(Net, self).__init__()
      self.fc1 =  torch.nn.Linear(784, 100)
      self.fc2 =  torch.nn.Linear(100, 10)

    def forward(self, x):
      # x -> (batch_size, 784)
      x = torch.relu(x)
      # x -> (batch_size, 10)
      x = torch.softmax(x, dim=1)
      return x

net = Net()
net.zero_grad()
Loss = torch.nn.CrossEntropyLoss()
optimizer =  torch.optim.SGD(net.parameters(), lr=0.01)

for epoch in range(1000):  # loop over the dataset multiple times

    batch_xs, batch_ys = mnist_m.train.next_batch(100)
    # convert to appropriate settings
    # note the input to the linear layer should be (n_sample, n_features)
    batch_xs = torch.tensor(batch_xs, requires_grad=True)
    # batch_ys -> (batch_size,)
    batch_ys = torch.tensor(batch_ys, dtype=torch.int64)

    # forward
    # output -> (batch_size, 10)
    output = net(batch_xs)
    # result -> (batch_size,)
    result = torch.argmax(output, dim=1)
    loss = Loss(output, batch_ys)

    # backward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
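
The code above uses mnist_m without showing how it is loaded (the answer below also points this out). A minimal sketch of how it was presumably loaded, assuming the legacy input_data helper with one_hot=False so the labels are integer class indices matching the dtype=torch.int64 conversion above:

# Sketch (assumption): load the MNIST data referred to as mnist_m above, using the
# legacy TensorFlow tutorials helper. one_hot=False keeps the labels as integer
# class indices, which is what torch.nn.CrossEntropyLoss expects.
from tensorflow.examples.tutorials.mnist import input_data

mnist_m = input_data.read_data_sets("MNIST_data/", one_hot=False)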

blu*_*nox 5

The problem here is that you don't apply your fully connected layers fc1 and fc2.

Your forward() currently looks like this:

def forward(self, x):
    # x -> (batch_size, 784)
    x = torch.relu(x)
    # x -> (batch_size, 10)
    x = torch.softmax(x, dim=1)
    return x

So if you change it to:

def forward(self, x):
    # x -> (batch_size, 784)
    x = self.fc1(x)             # added layer fc1
    x = torch.relu(x)  

    # x -> (batch_size, 10)
    x = self.fc2(x)             # added layer fc2
    x = torch.softmax(x, dim=1)
    return x

it should work.

Regarding Umang Gupta's answer: as far as I can see, calling zero_grad() before calling backward(), as the question author does, is fine. That shouldn't be the problem.
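
Since the original concern was that the weights might not be getting updated at all, one quick illustrative check (not part of the fix) is to look at a gradient right after loss.backward():

# Illustrative check (assumption: placed right after loss.backward() in the training
# loop, with the corrected forward() above). Non-zero gradient sums show that fc1
# and fc2 now receive gradients, i.e. their weights get updated by optimizer.step().
print('fc1 grad sum:', net.fc1.weight.grad.abs().sum().item())
print('fc2 grad sum:', net.fc2.weight.grad.abs().sum().item())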


Edit:

So I ran a short test: I raised the number of iterations from 1000 to 10000 to get a bigger picture of whether the loss really keeps decreasing. (Of course I also loaded the data into mnist_m, since that is not included in the code you posted.)

I added a print condition to the code:

if epoch % 1000 == 0:
    print('Epoch', epoch, '- Loss:', round(loss.item(), 3))

which prints the loss every 1000 iterations:

Epoch 0 - Loss: 2.305
Epoch 1000 - Loss: 2.263
Epoch 2000 - Loss: 2.187
Epoch 3000 - Loss: 2.024
Epoch 4000 - Loss: 1.819
Epoch 5000 - Loss: 1.699
Epoch 6000 - Loss: 1.699
Epoch 7000 - Loss: 1.656
Epoch 8000 - Loss: 1.675
Epoch 9000 - Loss: 1.659

Tested with PyTorch version 0.4.1.

So you can see that with the changed forward() the network is learning; the rest of your code I left unchanged.
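
If you also want to check the accuracy (the metric from the original question), a rough sketch after training, assuming mnist_m was loaded with one_hot=False as in the loading sketch above:

# Rough sketch (assumption): test-set accuracy after training, using the mnist_m
# test split. With the corrected forward() this should end up well above the ~10%
# random-guessing level reported in the question.
with torch.no_grad():
    test_xs = torch.tensor(mnist_m.test.images)                     # (10000, 784)
    test_ys = torch.tensor(mnist_m.test.labels, dtype=torch.int64)  # (10000,)
    predictions = torch.argmax(net(test_xs), dim=1)
    accuracy = (predictions == test_ys).float().mean().item()
    print('Test accuracy:', accuracy)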

Good luck!