Posted by Adr*_*ian

In language modeling, why do I have to init_hidden before every new training epoch? (PyTorch)

I have a question about the following code from a PyTorch language-modeling example:

print("Training and generating...")
for epoch in range(1, config.num_epochs + 1):
    total_loss = 0.0
    model.train()
    hidden = model.init_hidden(config.batch_size)

    for ibatch, i in enumerate(range(0, train_len - 1, seq_len)):
        data, targets = get_batch(train_data, i, seq_len)
        hidden = repackage_hidden(hidden)
        model.zero_grad()

        output, hidden = model(data, hidden)
        loss = criterion(output.view(-1, config.vocab_size), targets)
        loss.backward()
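For context, repackage_hidden is called in the loop above but not defined in the question. In PyTorch's word_language_model example it recursively detaches the hidden state from its computation graph; a minimal sketch under that assumption:

```python
def repackage_hidden(h):
    """Detach hidden states from their history.

    Without this, backpropagation through time would extend across every
    batch seen so far, so the graph (and memory use) would grow without bound.
    """
    if isinstance(h, tuple):  # an LSTM hidden state is an (h_n, c_n) pair
        return tuple(repackage_hidden(v) for v in h)
    return h.detach()  # plain tensor: RNN/GRU hidden state
```

Note that detaching keeps the hidden state's values (the sequence stays continuous across batches) while cutting the gradient path, whereas init_hidden resets the values to zero as well.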

Please look at line 5: hidden = model.init_hidden(config.batch_size).

The init_hidden function is as follows:

def init_hidden(self, bsz):
    weight = next(self.parameters()).data
    if self.rnn_type == 'LSTM':  # LSTM: return (h0, c0)
        return (Variable(weight.new(self.n_layers, bsz, self.hi_dim).zero_()),
                Variable(weight.new(self.n_layers, bsz, self.hi_dim).zero_()))
    else:  # GRU / vanilla RNN: return h0 only
        return Variable(weight.new(self.n_layers, …
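As an aside, this code uses the deprecated Variable wrapper. On PyTorch 0.4+, plain tensors carry autograd state, so the same all-zero initialization can be written with torch.zeros. The function below is a hypothetical standalone rewrite (the names n_layers, bsz, and hi_dim follow the question's attributes):

```python
import torch

def init_hidden_modern(n_layers, bsz, hi_dim, rnn_type='LSTM'):
    """Fresh all-zero hidden state with no computation history.

    Hypothetical rewrite of the question's init_hidden for PyTorch >= 0.4,
    where Variable is deprecated and torch.zeros suffices.
    """
    h0 = torch.zeros(n_layers, bsz, hi_dim)
    if rnn_type == 'LSTM':
        return (h0, torch.zeros(n_layers, bsz, hi_dim))  # (h0, c0) pair
    return h0  # GRU / vanilla RNN
```

The shape (n_layers, bsz, hi_dim) matches what nn.LSTM and nn.GRU expect for their initial hidden state when batch_first=False.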

nlp machine-learning recurrent-neural-network pytorch

Score: 7 · Answers: 2 · Views: 1947