I have the following code:
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
sequence_ids = model.generate(input_ids)
sequences = tokenizer.batch_decode(sequence_ids)
sequences
Currently it produces this:
['<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>']
Is there a way to prevent the generator from producing certain words (e.g. stopwords = ["park", "offer"])?
Looking at the documentation, there is a bad_words_ids argument that can be passed to generate(). Given a list of bad words, you can create the list of ids with:
tokenizer(bad_words, add_special_tokens=False).input_ids
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
bad_words = ["park", "offers"]
bad_words_ids = tokenizer(bad_words, add_special_tokens=False).input_ids
#[[2447], [704]]
sequence_ids = model.generate(input_ids, bad_words_ids=bad_words_ids)
#tensor([[ 0, 32099, 1061, 19, 3, 9, 710, 1482, 550, 45, 32098, 8, 32097, 1061, 5, 1]])
sequences = tokenizer.batch_decode(sequence_ids)
print(sequences)
#['<pad><extra_id_0> Park is a short walk away from<extra_id_1> the<extra_id_2> Park.</s>']
Note how the word "Park" now appears. This is because the tokenizer treats park (id 2447) and Park (id 1061) as two different tokens. Whether this happens depends on the tokenizer you use (some tokenizers are case-insensitive). If you don't want this behavior, you can also add Park to the bad-words list.
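If you want to block a word regardless of case, one option is to expand the bad-words list with its case variants before tokenizing. A minimal sketch (expand_bad_words is a hypothetical helper, not part of transformers):

```python
def expand_bad_words(words):
    """Return each word plus its lower-case and capitalized variants."""
    variants = set()
    for word in words:
        variants.update({word, word.lower(), word.capitalize()})
    return sorted(variants)

bad_words = expand_bad_words(["park", "offers"])
# bad_words now contains "park", "Park", "offers", "Offers"

# Then, exactly as before:
# bad_words_ids = tokenizer(bad_words, add_special_tokens=False).input_ids
# sequence_ids = model.generate(input_ids, bad_words_ids=bad_words_ids)
```

Note that this still only covers surface variants; subword tokenizers may also produce different ids for a word depending on whether it follows a space, so inspecting `tokenizer(bad_words, add_special_tokens=False).input_ids` is a good sanity check.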