Difference between forward and training_step in PyTorch Lightning?

Asked by Gra*_*ce (tags: python, resnet, pytorch, pytorch-lightning)

I have set up a transfer-learning ResNet in PyTorch Lightning. The structure is borrowed from this wandb tutorial: https://wandb.ai/wandb/wandb-lightning/reports/Image-Classification-using-PyTorch-Lightning--VmlldzoyODk1NzY

and from the documentation: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html

I am confused about the difference between the def forward() and def training_step() methods.

Initially, following the PL documentation, the model is not called in the training step, only in forward. However, forward is also never called from the training step. I have been running the model on data and the outputs look sensible (I have an image callback and I can see that the model is learning, and it ultimately achieves good accuracy). But I am worried that, given the forward method is never called, the model is somehow not being implemented?

The model code is:

# Imports assumed for this snippet (metric constructors follow the pre-0.11 torchmetrics API)
import torch
from torch import nn, optim
import pytorch_lightning as pl
from torchmetrics import Accuracy, Precision, Recall, JaccardIndex

class TransferLearning(pl.LightningModule):
    "Works for Resnet at the moment"
    def __init__(self, model, learning_rate, optimiser = 'Adam', weights = [ 1/2288  , 1/1500], av_type = 'macro' ):
        super().__init__()
        self.class_weights = torch.FloatTensor(weights)
        self.optimiser = optimiser
        self.thresh  =  0.5
        self.save_hyperparameters()
        self.learning_rate = learning_rate
        
        #add metrics for tracking 
        self.accuracy = Accuracy()
        self.loss= nn.CrossEntropyLoss()
        self.recall = Recall(num_classes=2, threshold=self.thresh, average = av_type)
        self.prec = Precision( num_classes=2, average = av_type )
        self.jacq_ind = JaccardIndex(num_classes=2)
        

        # init model
        backbone = model
        num_filters = backbone.fc.in_features
        layers = list(backbone.children())[:-1]
        self.feature_extractor = nn.Sequential(*layers)

        # use the pretrained model to classify damage 2 classes
        num_target_classes = 2
        self.classifier = nn.Linear(num_filters, num_target_classes)

    def forward(self, x):
        self.feature_extractor.eval()
        with torch.no_grad():
            representations = self.feature_extractor(x).flatten(1)
        x = self.classifier(representations)
        return x
    
    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.loss(logits, y)
        
        # training metrics
        preds = torch.argmax(logits, dim=1)
        acc = self.accuracy(preds, y)
        recall = self.recall(preds, y)
        precision = self.prec(preds, y)
        jac = self.jacq_ind(preds, y)

        self.log('train_loss', loss, on_step=True, on_epoch=True, logger=True)
        self.log('train_acc', acc, on_step=True, on_epoch=True, logger=True)
        self.log('train_recall', recall, on_step=True, on_epoch=True, logger=True)
        self.log('train_precision', precision, on_step=True, on_epoch=True, logger=True)
        self.log('train_jacc', jac, on_step=True, on_epoch=True, logger=True)
        return loss
  
    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.loss(logits, y)

        # validation metrics
        preds = torch.argmax(logits, dim=1)
        acc = self.accuracy(preds, y)
        recall = self.recall(preds, y)
        precision = self.prec(preds, y)
        jac = self.jacq_ind(preds, y)


        self.log('val_loss', loss, prog_bar=True)
        self.log('val_acc', acc, prog_bar=True)
        self.log('val_recall', recall, prog_bar=True)
        self.log('val_precision', precision, prog_bar=True)
        self.log('val_jacc', jac, prog_bar=True)

        return loss

    def test_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.loss(logits, y)
        
        # validation metrics
        preds = torch.argmax(logits, dim=1)
        acc = self.accuracy(preds, y)
        recall = self.recall(preds, y)
        precision = self.prec(preds, y)
        jac = self.jacq_ind(preds, y)


        self.log('test_loss', loss, prog_bar=True)
        self.log('test_acc', acc, prog_bar=True)
        self.log('test_recall', recall, prog_bar=True)
        self.log('test_precision', precision, prog_bar=True)
        self.log('test_jacc', jac, prog_bar=True)


        return loss
    
    def configure_optimizers(self):
        print('Optimise with {}'.format(self.optimiser))
        # optimizer = self.optimiser_dict[self.optimiser](self.parameters(), lr=self.learning_rate)

        # Support Adam, SGD, RMSprop and Adagrad as optimizers
        # (note that "Adam" selects optim.AdamW, i.e. Adam with decoupled weight decay).
        if self.optimiser == "Adam":
            optimiser = optim.AdamW(self.parameters(), lr = self.learning_rate)
        elif self.optimiser == "SGD":
            optimiser = optim.SGD(self.parameters(), lr = self.learning_rate)
        elif self.optimiser == "Adagrad":
            optimiser = optim.Adagrad(self.parameters(), lr = self.learning_rate)
        elif self.optimiser == "RMSProp":
            optimiser = optim.RMSprop(self.parameters(), lr = self.learning_rate)
        else:
            assert False, f"Unknown optimizer: \"{self.optimiser}\""

        return optimiser
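For context, here is a minimal usage sketch of how the Trainer drives this module. The backbone choice, the hyperparameter values, and the train_loader/val_loader dataloaders are assumed for illustration. Trainer.fit() is what invokes training_step and validation_step, while calling the model directly goes through forward.

# Minimal usage sketch (assumed: a torchvision backbone and existing DataLoaders).
import torch
import torchvision.models as models
import pytorch_lightning as pl

backbone = models.resnet50(pretrained=True)
model = TransferLearning(backbone, learning_rate=1e-3)

trainer = pl.Trainer(max_epochs=5)
trainer.fit(model, train_loader, val_loader)   # runs training_step / validation_step

# At inference time, calling the model dispatches to forward()
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))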

Answer by Mik*_*e B (8 votes):

I am confused about the difference between the def forward() and def training_step() methods.

Quoting the documentation:

"In Lightning we suggest separating training from inference. The training_step defines the full training loop. We encourage users to use forward to define inference actions."

So forward() defines your prediction/inference actions. It does not even need to be part of training_step, which is where you define your entire training loop. Nonetheless, you can choose to use it inside training_step if you want. Here is an example where forward() is not part of training_step:

    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop.
        # in this case it is independent of forward
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        # Logging to TensorBoard by default
        self.log("train_loss", loss)
        return loss
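As a usage note on this separation: outside of fit(), Lightning's prediction loop is typically where forward gets exercised. If you do not override predict_step, its default implementation simply calls self(batch), i.e. forward. A rough sketch under that assumption (the module instance, its trained weights, and the dataloader of bare input tensors are all placeholders):

# Sketch: Trainer.predict exercising forward() through the default predict_step.
# Assumes `autoencoder` is a trained LightningModule defining the forward() above.
import torch
from torch.utils.data import DataLoader
import pytorch_lightning as pl

predict_loader = DataLoader(torch.randn(16, 28 * 28), batch_size=4)  # inputs only, no labels
trainer = pl.Trainer()
embeddings = trainer.predict(autoencoder, dataloaders=predict_loader)  # each batch -> forward(batch)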

The model is not called in the training step, only in forward. But forward is also not called in the training step.

In fact, the reason forward() does not appear in your training_step is that self(x) calls it for you: nn.Module.__call__ dispatches to forward. You could also call self.forward(x) explicitly instead of using self(x).
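To make that concrete, here is a tiny standalone sketch (an arbitrary nn.Module, not the class above) showing that calling the module and calling forward produce the same result:

# Sketch: module(x) and module.forward(x) run the same computation,
# because nn.Module.__call__ dispatches to forward (after any registered hooks).
import torch
from torch import nn

layer = nn.Linear(8, 2)
x = torch.randn(4, 8)
assert torch.equal(layer(x), layer.forward(x))  # identical outputs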

I am worried that, given the forward method is not being called, the model is somehow not being implemented?

As long as the metrics you log with self.log are moving in the right direction, you know that your model is being called correctly and that it is learning.