Pen*_*uin 8 nlp transformer-model pytorch bert-language-model huggingface-transformers
I have several masked language models (mainly BERT, RoBERTa, ALBERT, ELECTRA). I also have a dataset of sentences. How can I get the perplexity of each sentence?
From the Huggingface documentation here, they mention that perplexity "is not well defined for masked language models like BERT", yet I still see people computing it somehow.
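(For contrast, here is a minimal sketch of how ordinary perplexity is computed for a causal LM, roughly following those docs: exponentiate the average token-level cross-entropy. The model name and sentence here are illustrative, not from the original post.)

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tok = AutoTokenizer.from_pretrained('gpt2')
lm = AutoModelForCausalLM.from_pretrained('gpt2')

enc = tok('London is the capital of Great Britain.', return_tensors='pt')
with torch.inference_mode():
    # Passing labels makes the model return the mean cross-entropy loss
    out = lm(enc.input_ids, labels=enc.input_ids)
print(torch.exp(out.loss).item())  # perplexity of the sentence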
For example, in this SO question, they compute it for a masked model with the following function:
def score(model, tokenizer, sentence, mask_token_id=103):
    tensor_input = tokenizer.encode(sentence, return_tensors='pt')
    repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1)
    mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
    masked_input = repeat_input.masked_fill(mask == 1, 103)
    labels = repeat_input.masked_fill(masked_input != 103, -100)
    loss, _ = model(masked_input, masked_lm_labels=labels)
    result = np.exp(loss.item())
    return result

score(model, tokenizer, '我爱你')  # returns 45.63794545581973
However, when I try to use this code I get TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'.
I tried it with several of my models:
from transformers import BertForMaskedLM, AutoTokenizer, RobertaForMaskedLM, AlbertForMaskedLM, ElectraForMaskedLM
import torch

# 1)
tokenizer = AutoTokenizer.from_pretrained("bioformers/bioformer-cased-v1.0")
model = BertForMaskedLM.from_pretrained("bioformers/bioformer-cased-v1.0")

# 2)
tokenizer = AutoTokenizer.from_pretrained("sultan/BioM-ELECTRA-Large-Generator")
model = ElectraForMaskedLM.from_pretrained("sultan/BioM-ELECTRA-Large-Generator")
This question also uses masked_lm_labels as an input, and it seems to work somehow.
Dav*_*ale 19
There is a paper, "Masked Language Model Scoring", that explores pseudo-perplexity from masked language models and shows that, while pseudo-perplexity is not theoretically well grounded, it still performs well for comparing the "naturalness" of texts.
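Concretely (paraphrasing the paper's definitions; notation mine), for a sentence W = (w_1, ..., w_|W|) the pseudo-log-likelihood sums the log-probability of each token conditioned on all the others, and pseudo-perplexity is its exponentiated negative per-token average:

PLL(W) = Σ_{t=1..|W|} log P_MLM(w_t | W \ {w_t})
PPPL(W) = exp(−PLL(W) / |W|)

The loss returned by the masked LM in the snippet below is exactly the mean negative log-probability over the masked positions, so exp(loss) yields the pseudo-perplexity.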
As for the code, your snippet is perfectly correct, with one detail: in recent Huggingface implementations of BERT, masked_lm_labels has been renamed to simply labels, to make the interfaces of the various models more compatible. I have also replaced the hard-coded 103 with the generic tokenizer.mask_token_id. So the following snippet should work:
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch
import numpy as np

model_name = 'cointegrated/rubert-tiny'
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def score(model, tokenizer, sentence):
    tensor_input = tokenizer.encode(sentence, return_tensors='pt')
    # One copy of the sentence per real token (excluding [CLS] and [SEP])
    repeat_input = tensor_input.repeat(tensor_input.size(-1) - 2, 1)
    # Shifted diagonal: mask a different token in each copy
    mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
    masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id)
    # Compute the loss only at the masked positions (-100 is ignored by the loss)
    labels = repeat_input.masked_fill(masked_input != tokenizer.mask_token_id, -100)
    with torch.inference_mode():
        loss = model(masked_input, labels=labels).loss
    return np.exp(loss.item())

print(score(sentence='London is the capital of Great Britain.', model=model, tokenizer=tokenizer))
# 4.541251105675365
print(score(sentence='London is the capital of South America.', model=model, tokenizer=tokenizer))
# 6.162017238332462
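Since the question asks about a whole dataset of sentences, a trivial (illustrative) way to score each one is to loop over them with this function:

sentences = [
    'London is the capital of Great Britain.',
    'London is the capital of South America.',
]
all_scores = [score(model=model, tokenizer=tokenizer, sentence=s) for s in sentences]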
You can try this code in Google Colab by running this gist.