Post by kin*_*uba

Embedding layer outputs nan

I am trying to learn a seq2seq model. The embedding layer sits in the encoder, and after some iterations it sometimes outputs nan values. I cannot figure out the cause. How can I fix this? The problem is in the first emb_layer call inside the forward function of the code below.


import torch
import torch.nn as nn


class TransformerEncoder(nn.Module):
    def __init__(self, vocab_size, hidden_size=1024, num_layers=6, dropout=0.2,
                 input_pad=1, batch_first=False, embedder=None, init_weight=0.1):
        super(TransformerEncoder, self).__init__()
        self.input_pad = input_pad
        self.vocab_size = vocab_size
        self.num_layers = num_layers
        self.embedder = embedder

        if embedder is not None:
            self.emb_layer = embedder
        else:
            # Use input_pad instead of a hard-coded 1 so the padding index
            # always matches the mask in set_mask below.
            self.emb_layer = nn.Embedding(vocab_size, hidden_size, padding_idx=input_pad)

        self.positional_encoder = PositionalEncoder()
        self.transformer_layers = nn.ModuleList()
        for _ in range(num_layers):
            self.transformer_layers.append(
                    # embedding_dim follows hidden_size rather than a hard-coded 1024.
                    TransformerEncoderBlock(num_heads=8, embedding_dim=hidden_size, dropout=dropout))

    def set_mask(self, inputs):
        self.input_mask = (inputs == self.input_pad).unsqueeze(1)

    def forward(self, inputs):
        x = self.emb_layer(inputs)  # nan first appears here
        x = self.positional_encoder(x)
        # (the rest of forward is cut off in the original post)
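Not part of the original post, but one way to pin down exactly which module first produces the nan is a forward hook that inspects every output as it is computed. The tiny embedding below (with one weight deliberately corrupted) is a hypothetical stand-in for emb_layer:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the encoder's emb_layer.
emb = nn.Embedding(10, 4, padding_idx=1)

# Deliberately corrupt one weight so the hook has something to catch.
with torch.no_grad():
    emb.weight[3, 0] = float('nan')

def nan_hook(module, inputs, output):
    # Fires right after the module's forward, before the nan propagates.
    if torch.isnan(output).any():
        raise RuntimeError(f"nan in output of {module.__class__.__name__}")

emb.register_forward_hook(nan_hook)

try:
    emb(torch.tensor([3]))   # looks up the corrupted row
except RuntimeError as err:
    print(err)
```

Registering the same hook on emb_layer (or on every submodule via `model.apply`) makes the failing iteration stop at the first nan instead of silently carrying it forward.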
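A nan that appears only after some iterations is often a symptom of exploding gradients blowing up the embedding weights, rather than a bug in the layer itself. A common mitigation, sketched here on a toy model (the bare embedding, SGD optimizer, and sum loss are placeholders, not from the post), is clipping the gradient norm before each optimizer step:

```python
import torch
import torch.nn as nn

# Toy stand-ins: a bare embedding and SGD replace the full encoder setup.
model = nn.Embedding(10, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.tensor([2, 3, 5])
loss = model(x).sum()   # placeholder loss
loss.backward()

# Rescale gradients so their global norm is at most 1.0, then step.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
opt.zero_grad()
```

If clipping makes the nan go away, the embedding itself was fine and the learning rate or loss scale is the thing to tune.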

transformer-model deep-learning pytorch seq2seq

5 votes · 1 answer · 3602 views