Where does the Hugging Face GPT-2 language model code compute perplexity?

use*_*659 5 machine-learning gpt perplexity huggingface-transformers

I've seen some GitHub comments saying that the loss returned by the model() call is in the form of perplexity: https://github.com/huggingface/transformers/issues/473

But when I look at the relevant code... https://huggingface.co/transformers/_modules/transformers/modeling_openai.html#OpenAIGPTLMHeadModel.forward

    if labels is not None:
        # Shift so that tokens < n predict n
        shift_logits = lm_logits[..., :-1, :].contiguous()
        shift_labels = labels[..., 1:].contiguous()
        # Flatten the tokens
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
        outputs = (loss,) + outputs

    return outputs  # (loss), lm_logits, (all hidden states), (all attentions)
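
For context, the shift lines each position's logits up against the following token, so the model is scored on next-token prediction, and CrossEntropyLoss returns the mean negative log-likelihood. A minimal standalone sketch of that same computation with toy tensors (the shapes and values here are invented for illustration, not taken from the library):

    import torch
    from torch.nn import CrossEntropyLoss

    # Toy shapes: batch=1, seq_len=5, vocab_size=10 (illustrative only)
    lm_logits = torch.randn(1, 5, 10)
    labels = torch.randint(0, 10, (1, 5))

    # Position i's logits are scored against the token at position i + 1,
    # i.e. each token is used to predict the *next* token.
    shift_logits = lm_logits[..., :-1, :].contiguous()
    shift_labels = labels[..., 1:].contiguous()

    loss_fct = CrossEntropyLoss()
    loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
    print(loss)  # mean cross-entropy (in nats), not perplexity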

I can see the cross-entropy being computed, but nothing that converts it to perplexity. Where does the loss ultimately get transformed? Or is there a transformation already happening that I'm not seeing?

use*_*659 7

Ah okay, I found the answer. The code is actually returning the cross-entropy loss. In the GitHub comments they call it perplexity only because the OP of that issue then does

return math.exp(loss)

which converts the cross-entropy into perplexity :)
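
To make that concrete, here is a rough sketch of computing perplexity for a piece of text with GPT-2. The checkpoint name and example sentence are just placeholders, and it assumes the tuple-style outputs shown above, where the loss comes first when labels are passed:

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    text = "The quick brown fox jumps over the lazy dog."
    input_ids = tokenizer.encode(text, return_tensors="pt")

    with torch.no_grad():
        # Passing labels=input_ids makes the forward pass return the mean
        # next-token cross-entropy as the first element of the outputs.
        outputs = model(input_ids, labels=input_ids)
        loss = outputs[0]

    perplexity = math.exp(loss.item())  # exp(cross-entropy) = perplexity
    print(perplexity)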