I'm trying to implement early stopping to keep my neural network model from overfitting. I'm fairly sure the logic is right, but for some reason it doesn't work. I want the early stopping function to return True when the validation loss has been greater than the training loss for some number of epochs. Instead it always returns False, even when the validation loss becomes much larger than the training loss. Can you see where the problem is?
def early_stopping(train_loss, validation_loss, min_delta, tolerance):
    counter = 0
    if (validation_loss - train_loss) > min_delta:
        counter += 1
        if counter >= tolerance:
            return True
for i in range(epochs):
    print(f"Epoch {i+1}")
    epoch_train_loss, pred = train_one_epoch(model, train_dataloader, loss_func, optimiser, device)
    train_loss.append(epoch_train_loss)
    # validation
    with torch.no_grad():
        epoch_validate_loss = validate_one_epoch(model, validate_dataloader, loss_func, device)
        validation_loss.append(epoch_validate_loss)
    # early stopping
    if early_stopping(epoch_train_loss, epoch_validate_loss, min_delta=10, tolerance=20):
        print("We are at epoch:", i)
        break
Edit 2:
def train_validate (model, train_dataloader, validate_dataloader, loss_func, optimiser, device, epochs): …
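For comparison, here is a minimal self-contained sketch of the behaviour I'm after, with the patience counter stored on an object so it survives from one epoch to the next (the `EarlyStopper` class name and the loss values are illustrative, not from my actual training code):

```python
class EarlyStopper:
    """Counts consecutive epochs where validation loss exceeds
    training loss by more than min_delta."""

    def __init__(self, min_delta, tolerance):
        self.min_delta = min_delta
        self.tolerance = tolerance
        self.counter = 0  # persists across calls, unlike a local variable

    def step(self, train_loss, validation_loss):
        if (validation_loss - train_loss) > self.min_delta:
            self.counter += 1
        else:
            self.counter = 0  # gap closed again, reset the patience counter
        return self.counter >= self.tolerance

# Illustrative run: a gap of 15 (> min_delta=10) held for
# tolerance=3 consecutive epochs should trigger the stop.
stopper = EarlyStopper(min_delta=10, tolerance=3)
stops = [stopper.step(t, v) for t, v in [(1.0, 16.0), (1.0, 16.0), (1.0, 16.0)]]
print(stops)  # [False, False, True]
```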