Calculating accuracy per epoch in PyTorch

H.S*_*H.S 11 neural-network pytorch

I am working on a neural network problem, classifying data as 1 or 0, and I use binary cross-entropy loss for this. The loss looks fine; however, the accuracy is very low and is not improving. I assume I have made a mistake in the accuracy calculation: after each epoch I threshold the outputs, count the correct predictions, and divide that count by the total size of the dataset. Am I doing something wrong in the accuracy calculation? And why does it not improve, but instead get worse? Here is my code:

net = Model()
criterion = torch.nn.BCELoss(size_average=True)   
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

num_epochs = 100
for epoch in range(num_epochs):
    for i, (inputs,labels) in enumerate (train_loader):
        inputs = Variable(inputs.float())
        labels = Variable(labels.float())
        output = net(inputs)
        optimizer.zero_grad()
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

    #Accuracy
    output = (output>0.5).float()
    correct = (output == labels).float().sum()
    print("Epoch {}/{}, Loss: {:.3f}, Accuracy: {:.3f}".format(epoch+1,num_epochs, loss.data[0], correct/x.shape[0]))

Here is the strange output I get:

Epoch 1/100, Loss: 0.389, Accuracy: 0.035
Epoch 2/100, Loss: 0.370, Accuracy: 0.036
Epoch 3/100, Loss: 0.514, Accuracy: 0.030
Epoch 4/100, Loss: 0.539, Accuracy: 0.030
Epoch 5/100, Loss: 0.583, Accuracy: 0.029
Epoch 6/100, Loss: 0.439, Accuracy: 0.031
Epoch 7/100, Loss: 0.429, Accuracy: 0.034
Epoch 8/100, Loss: 0.408, Accuracy: 0.035
Epoch 9/100, Loss: 0.316, Accuracy: 0.035
Epoch 10/100, Loss: 0.436, Accuracy: 0.035
Epoch 11/100, Loss: 0.365, Accuracy: 0.034
Epoch 12/100, Loss: 0.485, Accuracy: 0.031
Epoch 13/100, Loss: 0.392, Accuracy: 0.033
Epoch 14/100, Loss: 0.494, Accuracy: 0.030
Epoch 15/100, Loss: 0.369, Accuracy: 0.035
Epoch 16/100, Loss: 0.495, Accuracy: 0.029
Epoch 17/100, Loss: 0.415, Accuracy: 0.034
Epoch 18/100, Loss: 0.410, Accuracy: 0.035
Epoch 19/100, Loss: 0.282, Accuracy: 0.038
Epoch 20/100, Loss: 0.499, Accuracy: 0.031
Epoch 21/100, Loss: 0.446, Accuracy: 0.030
Epoch 22/100, Loss: 0.585, Accuracy: 0.026
Epoch 23/100, Loss: 0.419, Accuracy: 0.035
Epoch 24/100, Loss: 0.492, Accuracy: 0.031
Epoch 25/100, Loss: 0.537, Accuracy: 0.031
Epoch 26/100, Loss: 0.439, Accuracy: 0.033
Epoch 27/100, Loss: 0.421, Accuracy: 0.035
Epoch 28/100, Loss: 0.532, Accuracy: 0.034
Epoch 29/100, Loss: 0.234, Accuracy: 0.038
Epoch 30/100, Loss: 0.492, Accuracy: 0.027
Epoch 31/100, Loss: 0.407, Accuracy: 0.035
Epoch 32/100, Loss: 0.305, Accuracy: 0.038
Epoch 33/100, Loss: 0.663, Accuracy: 0.025
Epoch 34/100, Loss: 0.588, Accuracy: 0.031
Epoch 35/100, Loss: 0.329, Accuracy: 0.035
Epoch 36/100, Loss: 0.474, Accuracy: 0.033
Epoch 37/100, Loss: 0.535, Accuracy: 0.031
Epoch 38/100, Loss: 0.406, Accuracy: 0.033
Epoch 39/100, Loss: 0.513, Accuracy: 0.030
Epoch 40/100, Loss: 0.593, Accuracy: 0.030
Epoch 41/100, Loss: 0.265, Accuracy: 0.036
Epoch 42/100, Loss: 0.576, Accuracy: 0.031
Epoch 43/100, Loss: 0.565, Accuracy: 0.027
Epoch 44/100, Loss: 0.576, Accuracy: 0.030
Epoch 45/100, Loss: 0.396, Accuracy: 0.035
Epoch 46/100, Loss: 0.423, Accuracy: 0.034
Epoch 47/100, Loss: 0.489, Accuracy: 0.033
Epoch 48/100, Loss: 0.591, Accuracy: 0.029
Epoch 49/100, Loss: 0.415, Accuracy: 0.034
Epoch 50/100, Loss: 0.291, Accuracy: 0.039
Epoch 51/100, Loss: 0.395, Accuracy: 0.033
Epoch 52/100, Loss: 0.540, Accuracy: 0.026
Epoch 53/100, Loss: 0.436, Accuracy: 0.033
Epoch 54/100, Loss: 0.346, Accuracy: 0.036
Epoch 55/100, Loss: 0.519, Accuracy: 0.029
Epoch 56/100, Loss: 0.456, Accuracy: 0.031
Epoch 57/100, Loss: 0.425, Accuracy: 0.035
Epoch 58/100, Loss: 0.311, Accuracy: 0.039
Epoch 59/100, Loss: 0.406, Accuracy: 0.034
Epoch 60/100, Loss: 0.360, Accuracy: 0.035
Epoch 61/100, Loss: 0.476, Accuracy: 0.030
Epoch 62/100, Loss: 0.404, Accuracy: 0.034
Epoch 63/100, Loss: 0.382, Accuracy: 0.036
Epoch 64/100, Loss: 0.538, Accuracy: 0.031
Epoch 65/100, Loss: 0.392, Accuracy: 0.034
Epoch 66/100, Loss: 0.434, Accuracy: 0.033
Epoch 67/100, Loss: 0.479, Accuracy: 0.031
Epoch 68/100, Loss: 0.494, Accuracy: 0.031
Epoch 69/100, Loss: 0.415, Accuracy: 0.034
Epoch 70/100, Loss: 0.390, Accuracy: 0.036
Epoch 71/100, Loss: 0.330, Accuracy: 0.038
Epoch 72/100, Loss: 0.449, Accuracy: 0.030
Epoch 73/100, Loss: 0.315, Accuracy: 0.039
Epoch 74/100, Loss: 0.450, Accuracy: 0.031
Epoch 75/100, Loss: 0.562, Accuracy: 0.030
Epoch 76/100, Loss: 0.447, Accuracy: 0.031
Epoch 77/100, Loss: 0.408, Accuracy: 0.038
Epoch 78/100, Loss: 0.359, Accuracy: 0.034
Epoch 79/100, Loss: 0.372, Accuracy: 0.035
Epoch 80/100, Loss: 0.452, Accuracy: 0.034
Epoch 81/100, Loss: 0.360, Accuracy: 0.035
Epoch 82/100, Loss: 0.453, Accuracy: 0.031
Epoch 83/100, Loss: 0.578, Accuracy: 0.030
Epoch 84/100, Loss: 0.537, Accuracy: 0.030
Epoch 85/100, Loss: 0.483, Accuracy: 0.035
Epoch 86/100, Loss: 0.343, Accuracy: 0.036
Epoch 87/100, Loss: 0.439, Accuracy: 0.034
Epoch 88/100, Loss: 0.686, Accuracy: 0.023
Epoch 89/100, Loss: 0.265, Accuracy: 0.039
Epoch 90/100, Loss: 0.369, Accuracy: 0.035
Epoch 91/100, Loss: 0.521, Accuracy: 0.027
Epoch 92/100, Loss: 0.662, Accuracy: 0.027
Epoch 93/100, Loss: 0.581, Accuracy: 0.029
Epoch 94/100, Loss: 0.322, Accuracy: 0.034
Epoch 95/100, Loss: 0.375, Accuracy: 0.035
Epoch 96/100, Loss: 0.575, Accuracy: 0.031
Epoch 97/100, Loss: 0.489, Accuracy: 0.030
Epoch 98/100, Loss: 0.435, Accuracy: 0.033
Epoch 99/100, Loss: 0.440, Accuracy: 0.031
Epoch 100/100, Loss: 0.444, Accuracy: 0.033

Lak*_*rma 11

Is x the entire input dataset? If so, correct/x.shape[0] divides by the size of the whole input dataset rather than the size of the mini-batch. Try changing it to correct/output.shape[0].
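
A minimal sketch of that fix, assuming output and labels still hold the tensors from the last mini-batch of the epoch, as in the question's loop:

# hedged sketch: accuracy for the last mini-batch of the epoch
preds = (output > 0.5).float()               # threshold the probabilities at 0.5
correct = (preds == labels).float().sum()    # correct predictions in this batch
accuracy = correct / output.shape[0]         # divide by the batch size, not the dataset size
print("Accuracy: {:.3f}".format(accuracy.item()))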


ale*_*lik 11

A better approach is to accumulate correct right after the optimization step:

for epoch in range(num_epochs):

    correct = 0
    for i, (inputs,labels) in enumerate (train_loader):
        ...
        output = net(inputs)
        ...
        optimizer.step()

        # threshold the sigmoid outputs before comparing with the labels
        preds = (output > 0.5).float()
        correct += (preds == labels).float().sum()

    accuracy = 100 * correct / len(trainset)
    # trainset, not train_loader
    # probably x in your case

    print("Accuracy = {}".format(accuracy))

  • @CharlieParker .item() works when the tensor contains exactly one value; otherwise it raises an error. (output == labels) is a boolean tensor with many values; casting it to float turns False into 0 and True into 1. We then sum up the number of Trues (.sum() by itself is probably enough, since it should do the casting) (2 upvotes)
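
A tiny illustration of that casting behavior, with made-up tensors:

import torch

output = torch.tensor([1., 0., 1., 1.])
labels = torch.tensor([1., 1., 1., 0.])

matches = (output == labels)          # boolean tensor: [True, False, True, False]
print(matches.sum())                  # tensor(2) -- .sum() casts the bools itself
print(matches.float().sum().item())   # 2.0 -- .item() extracts the single Python scalar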

Cha*_*ker 7

Just read this answer:

/sf/answers/4428970171/


Old answer

I think the simplest answer is the one from the cifar10 tutorial:

correct = 0
total = 0
with torch.no_grad():
    net.eval()
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

So:

acc = (true == pred).sum().item()

If you keep a running counter, don't forget to divide by the size of the dataset (or something similar) at the end.

I used:

N = data.size(0) # batch size, since data usually has size (batch_size, D1, D2, ...)
acc += (1/N) * (true == pred).sum().item() # accumulate this batch's fraction of correct predictions

Self-contained code:

# testing accuracy function
# https://discuss.pytorch.org/t/calculating-accuracy-of-the-current-minibatch/4308/11
# /sf/ask/3605269601/

import torch
import torch.nn as nn

D = 1
true = torch.tensor([0,1,0,1,1]).reshape(5,1)
print(f'true.size() = {true.size()}')

batch_size = true.size(0)
print(f'batch_size = {batch_size}')
x = torch.randn(batch_size,D)
print(f'x = {x}')
print(f'x.size() = {x.size()}')

mdl = nn.Linear(D,1)
logit = mdl(x)
# note: with a single output unit, the argmax over dim 1 is always index 0,
# so this line mainly demonstrates the mechanics of torch.max
_, pred = torch.max(logit.data, 1)

print(f'logit = {logit}')

print(f'pred = {pred}')
print(f'true = {true}')

# squeeze 'true' to shape (5,): comparing (5,1) with (5,) would broadcast
# to a (5,5) matrix and inflate the count
acc = (true.squeeze() == pred).sum().item()
print(f'acc = {acc}')

Also, I found this code to be a good reference:

def calc_accuracy(mdl, X, Y):
    # reduce/collapse the classification dimension according to max op
    # resulting in most likely label
    max_vals, max_indices = mdl(X).max(1)
    # assumes the first dimension is batch size
    n = max_indices.size(0)  # index 0 for extracting the # of elements
    # calculate acc (note .item() to do float division)
    acc = (max_indices == Y).sum().item() / n
    return acc

For an explanation of pred = mdl(x).max(1), see https://discuss.pytorch.org/t/how-does-one-get-the-predicted-classification-label-from-a-pytorch-model/91649

The main thing is that you have to reduce/collapse the dimension where the raw classification values/logits take their maximum, and then select it with .indices. Usually this is dimension 1, since dim 0 holds the batch size, e.g. logits of size [batch_size, D_classification], while the raw data might have size [batch_size, C, H, W].

A synthetic example with one-dimensional raw data:

import torch
import torch.nn as nn

# data dimension [batch-size, D]
D, Dout = 1, 5
batch_size = 16
x = torch.randn(batch_size, D)
y = torch.randint(low=0,high=Dout,size=(batch_size,))

mdl = nn.Linear(D, Dout)
logits = mdl(x)
print(f'y.size() = {y.size()}')
# collapses dimension 1 with a max; that is the classification dimension,
# so this returns the most likely label. Note that you need .indices since
# you want the position of the most likely label (not its raw logit value)
pred = logits.max(1).indices
print(pred)

print('--- preds vs truth ---')
print(f'predictions = {pred}')
print(f'y = {y}')

acc = (pred == y).sum().item() / pred.size(0)
print(acc)

Output:


y.size() = torch.Size([16])
tensor([3, 1, 1, 3, 4, 1, 4, 3, 1, 1, 4, 4, 4, 4, 3, 1])
--- preds vs truth ---
predictions = tensor([3, 1, 1, 3, 4, 1, 4, 3, 1, 1, 4, 4, 4, 4, 3, 1])
y = tensor([3, 3, 3, 0, 3, 4, 0, 1, 1, 2, 1, 4, 4, 2, 0, 0])
0.25
