rub*_*iks 4 — tags: cross-entropy, pytorch
I want to print my model's validation loss each epoch. What is the correct way to compute and print the validation loss?
Is it this:
criterion = nn.CrossEntropyLoss(reduction='mean')
for x, y in validation_loader:
    optimizer.zero_grad()
    out = model(x)
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()
    losses += loss
display_loss = losses / len(validation_loader)
print(display_loss)
Or like this:
criterion = nn.CrossEntropyLoss(reduction='mean')
for x, y in validation_loader:
    optimizer.zero_grad()
    out = model(x)
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()
    losses += loss
display_loss = losses / len(validation_loader.dataset)
print(display_loss)
Or something else? Thanks.
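For context on the two divisors in the question: which one is correct depends on the criterion's `reduction` setting. A small self-contained sketch (with made-up toy data) showing that with `reduction='mean'` you divide the accumulated loss by the number of batches, while with `reduction='sum'` you divide by the number of samples:

```python
import torch
import torch.nn as nn

# Toy setup (hypothetical sizes): 10 samples, 3 classes, batch size 5
torch.manual_seed(0)
logits = torch.randn(10, 3)
targets = torch.randint(0, 3, (10,))
batches = [(logits[i:i + 5], targets[i:i + 5]) for i in range(0, 10, 5)]

# reduction='mean': each batch loss is already averaged over its samples,
# so summing batch losses and dividing by the number of batches gives the
# per-sample average (exactly so when all batches have equal size).
mean_crit = nn.CrossEntropyLoss(reduction='mean')
per_batch = sum(mean_crit(o, t) for o, t in batches) / len(batches)

# reduction='sum': batch losses are summed over samples, so divide by the
# total number of samples (len(loader.dataset)) instead.
sum_crit = nn.CrossEntropyLoss(reduction='sum')
per_sample = sum(sum_crit(o, t) for o, t in batches) / len(targets)

print(torch.allclose(per_batch, per_sample))
```

Note that with `reduction='mean'` and a last batch that is smaller than the rest, dividing by `len(loader)` is only an approximation of the true per-sample average.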
Sha*_*hai 12
You should never, under any circumstances, use validation/test data to train the model (i.e., never call loss.backward() + optimizer.step() on it)!
If you want to validate your model:
model.eval()  # handle drop-out/batch norm layers
loss = 0
with torch.no_grad():
    for x, y in validation_loader:
        out = model(x)  # only forward pass - NO gradients!!
        loss += criterion(out, y)
# total loss - divide by number of batches
val_loss = loss / len(validation_loader)
Note that the optimizer is irrelevant when evaluating the model on the validation set. You are not changing the model based on the validation data — you are only validating it.
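Putting the answer together, a minimal evaluation helper might look like the sketch below (the model, loader, and function names are placeholders; the tiny linear model and random data exist only to make it runnable):

```python
import torch
import torch.nn as nn

def evaluate(model, loader, criterion):
    """Average validation loss per batch: forward passes only, no updates."""
    model.eval()                       # switch dropout/batch-norm to eval mode
    total = 0.0
    with torch.no_grad():              # no gradients needed for validation
        for x, y in loader:
            total += criterion(model(x), y).item()  # .item() detaches to a float
    model.train()                      # restore training mode for the next epoch
    return total / len(loader)

# Tiny smoke test with a linear model and random data (assumed shapes)
torch.manual_seed(0)
model = nn.Linear(4, 3)
data = [(torch.randn(5, 4), torch.randint(0, 3, (5,))) for _ in range(3)]
val_loss = evaluate(model, data, nn.CrossEntropyLoss(reduction='mean'))
print(val_loss)
```

Accumulating with `.item()` rather than the tensor itself also avoids keeping autograd history or GPU tensors alive across batches.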
Viewed: 6833 times