Dar*_*ici 7 python conv-neural-network pytorch
I am using PyTorch to classify a series of images. The neural network is defined as follows:
from collections import OrderedDict

import torch
from torch import nn, optim
from torchvision import models

model = models.vgg16(pretrained=True)
model.cuda()

for param in model.parameters():
    param.requires_grad = False

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 4096)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(4096, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))

model.classifier = classifier
The criterion and optimizer are as follows:
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
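A quick sanity check of this freeze-the-backbone setup (a minimal sketch with a tiny stand-in module instead of VGG16, with shrunken layer sizes, so it runs anywhere without downloading pretrained weights):

```python
import torch
from torch import nn

# Tiny stand-in for the pretrained VGG16 backbone (sizes shrunk for illustration)
backbone = nn.Linear(16, 32)
for p in backbone.parameters():
    p.requires_grad = False  # frozen, as done for the VGG16 features above

classifier = nn.Linear(32, 4)  # the new trainable head

# Only the head's parameters should reach the optimizer,
# which is why optim.Adam(model.classifier.parameters(), ...) is enough
trainable = [p for p in classifier.parameters() if p.requires_grad]
frozen = [p for p in backbone.parameters() if p.requires_grad]
print(len(trainable), len(frozen))  # 2 0 (head weight + bias; nothing in the backbone)
```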
My validation function is as follows:
def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0
    for images, labels in testloader:
        images.resize_(images.shape[0], 784)
        output = model.forward(images)
        test_loss += criterion(output, labels).item()
        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()
    return test_loss, accuracy
This is the snippet that raises the following error:

RuntimeError: input has less dimensions than expected
epochs = 3
print_every = 40
steps = 0
running_loss = 0
testloader = dataloaders['test']

# change to cuda
model.to('cuda')

for e in range(epochs):
    running_loss = 0
    for ii, (inputs, labels) in enumerate(dataloaders['train']):
        steps += 1
        inputs, labels = inputs.to('cuda'), labels.to('cuda')
        optimizer.zero_grad()

        # Forward and backward passes
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        if steps % print_every == 0:
            model.eval()
            with torch.no_grad():
                test_loss, accuracy = validation(model, testloader, criterion)

            print("Epoch: {}/{}.. ".format(e+1, epochs),
                  "Training Loss: {:.3f}.. ".format(running_loss/print_every),
                  "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
                  "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))

            running_loss = 0
Any help?
In case it helps someone:

If you don't have a GPU system (say you are developing on a laptop and will eventually test on a server with a GPU), you can do the same with:
if torch.cuda.is_available():
    inputs = inputs.to('cuda')
else:
    inputs = inputs.to('cpu')
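The same fallback is usually written once as a `device` variable and reused for the model and every batch (a minimal sketch with a stand-in model and batch):

```python
import torch

# Pick the device once, then reuse it everywhere
device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = torch.nn.Linear(4, 2).to(device)  # stand-in model
inputs = torch.randn(3, 4).to(device)     # stand-in batch
outputs = model(inputs)
print(outputs.shape, outputs.device.type)
```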
Also, in case you are wondering why there is a LogSoftmax rather than a Softmax: it is because he uses NLLLoss as his loss function. You can read more about softmax here.
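Concretely, `LogSoftmax` followed by `NLLLoss` computes the same quantity as `CrossEntropyLoss` applied directly to the raw logits, which is a quick way to check the pairing (a minimal sketch with a fake batch):

```python
import torch
from torch import nn

torch.manual_seed(0)
logits = torch.randn(4, 102)           # fake batch of 4 over 102 classes
targets = torch.randint(0, 102, (4,))  # fake labels

# The pairing used in the question: LogSoftmax output fed to NLLLoss
log_probs = nn.LogSoftmax(dim=1)(logits)
nll = nn.NLLLoss()(log_probs, targets)

# Equivalent one-step loss on raw logits
ce = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(nll, ce))  # True
```

This is also why the model must not end in a plain `Softmax` when training with `NLLLoss`: `NLLLoss` expects log-probabilities as input.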