Sam He · huggingface-transformers
I can use the default model in the sentiment-analysis pipeline without any problems.
# Allocate a pipeline for sentiment-analysis
from transformers import pipeline

nlp = pipeline('sentiment-analysis')
nlp('I am a black man.')
>>>[{'label': 'NEGATIVE', 'score': 0.5723695158958435}]
However, when I try to customize the pipeline slightly by specifying a particular model, it throws a KeyError.
from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead

nlp = pipeline('sentiment-analysis',
    tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"),
    model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational"))
nlp('I am a black man.')
>>>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-55-af7e46d6c6c9> in <module>
3 tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"),
4 model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational"))
----> 5 nlp('I am a black man.')
6
7
~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
721 outputs = super().__call__(*args, **kwargs)
722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
--> 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores]
724
725
~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0)
721 outputs = super().__call__(*args, **kwargs)
722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
--> 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores]
724
725
KeyError: 58129
I am facing the same issue. I am using an XLM-R model fine-tuned on the SQuAD v2 dataset ("a-ware/xlmroberta-squadv2"). In my case, the KeyError is 16.
While searching for help with this problem I found the following information (link); I hope you find it helpful.

Answer (from the link):
The pipeline raises an exception when the model predicts a token that is not part of the document (for example, the final special token [SEP]).
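The same mechanism explains the original KeyError 58129. Below is a minimal sketch (my own illustration, not the transformers source) of the lookup that fails inside pipelines.py: the classification pipeline softmaxes the model output and looks the argmax index up in model.config.id2label. A language-model head (AutoModelWithLMHead) outputs one logit per vocabulary token instead of one per label, so the argmax can be any token id. The vocabulary size used here is an illustrative assumption.

```python
import numpy as np

# A classification head has as many logits as labels, so the
# id2label lookup always succeeds:
id2label = {0: 'NEGATIVE', 1: 'POSITIVE'}
logits = np.array([1.5, -0.3])
scores = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
print(id2label[scores.argmax()])  # NEGATIVE

# An LM head outputs vocab-sized logits, so the argmax is a token id
# (e.g. 58129) that is not a key of the two-entry id2label dict:
lm_logits = np.zeros(119547)  # hypothetical vocabulary size
lm_logits[58129] = 5.0
try:
    id2label[int(lm_logits.argmax())]
except KeyError as err:
    print('KeyError:', err)  # KeyError: 58129
```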
My problem:
from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering
from transformers import pipeline
nlp = pipeline('question-answering',
model = XLMRobertaForQuestionAnswering.from_pretrained('a-ware/xlmroberta-squadv2'),
tokenizer= XLMRobertaTokenizer.from_pretrained('a-ware/xlmroberta-squadv2'))
nlp(question = "Who was Jim Henson?", context ="Jim Henson was a nice puppet")
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-15-b5a8ece5e525> in <module>()
1 context = "Jim Henson was a nice puppet"
2 # --------------- CON INTERROGACIONES
----> 3 nlp(question = "Who was Jim Henson?", context =context)
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)
1745 ),
1746 }
-> 1747 for s, e, score in zip(starts, ends, scores)
1748 ]
1749
KeyError: 16
Solution 1: add punctuation at the end of the context
To avoid the error of the model trying to extract the final token (which may be the special [SEP]), I added an extra element at the end of the context (a punctuation mark, in this case):
nlp(question = "Who was Jim Henson?", context ="Jim Henson was a nice puppet.")
[OUT]
{'answer': 'nice puppet.', 'end': 28, 'score': 0.5742837190628052, 'start': 17}
Solution 2: don't use pipeline()
The raw model can be used directly to retrieve the correct token indices.
from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering
import torch

tokenizer = XLMRobertaTokenizer.from_pretrained('a-ware/xlmroberta-squadv2')
model = XLMRobertaForQuestionAnswering.from_pretrained('a-ware/xlmroberta-squadv2')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors='pt')
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']

# The model returns start/end logits over the input tokens
start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2]

# Take the most likely start/end positions and decode that span
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1])
answer = tokenizer.convert_tokens_to_ids(answer.split())
answer = tokenizer.decode(answer)
Update
Looking at your case in more detail, I found that the default model for the conversational task in the pipeline is distilbert-base-cased (source code).
The first solution I posted is really not a good one. Trying other questions, I got the same error. However, the model itself works fine outside the pipeline (as I showed in Solution 2). Therefore, I believe that not every model can be plugged into a pipeline. If anyone has more information about this, please help us. Thanks.
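As a rough rule of thumb (my own sketch, not an official transformers check), a model only fits a classification-style pipeline when every possible argmax index over its output logits has an entry in id2label:

```python
def fits_classification_pipeline(num_logits, id2label):
    """Return True if every possible argmax index maps to a label."""
    return all(i in id2label for i in range(num_logits))

# A 2-label classification head fits:
print(fits_classification_pipeline(2, {0: 'NEGATIVE', 1: 'POSITIVE'}))  # True

# A vocab-sized LM head does not (hypothetical vocabulary size):
print(fits_classification_pipeline(119547, {0: 'NEGATIVE', 1: 'POSITIVE'}))  # False
```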