I've been experimenting with stacking language models lately and noticed something interesting: the output embeddings of BERT and XLNet are not the same as the input embeddings. For example, this snippet:
import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
enc = bert.get_input_embeddings()(sent)   # [seq_len, hidden] input embeddings
dec = bert.get_output_embeddings()(enc)   # [seq_len, vocab] logits from the MLM head
print(tok.decode(dec.softmax(-1).argmax(-1)))
prints this for me:
,,,,,,,,,,,,,,,,,
I would have expected it to return the (formatted) input sequence, since I was under the impression that the input and output token embeddings are tied.
Interestingly, most other models don't exhibit this behaviour. For example, if you run the same snippet with GPT2, ALBERT, or RoBERTa, it does output the input sequence.
Is this a bug, or is it expected behaviour for BERT/XLNet?
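For reference, here is a quick way to check how the two embedding layers relate (a minimal sketch, assuming get_input_embeddings() returns an nn.Embedding and get_output_embeddings() returns the MLM decoder as an nn.Linear):

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")

in_emb = bert.get_input_embeddings()    # token embedding layer
out_emb = bert.get_output_embeddings()  # MLM decoder layer

# Check whether the two weight matrices are shared (tied).
print(torch.equal(in_emb.weight, out_emb.weight))

# The decoder may also carry a separate per-vocabulary bias on top of the tied weights.
print(getattr(out_emb, "bias", None) is not None)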
Not sure if it's too late, but I experimented a bit with your code and managed to recover the input sentence with it. :)
import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
print("Initial sentence:", sent)

enc = bert.get_input_embeddings()(sent)
dec = bert.get_output_embeddings()(enc)
# Softmax over the sequence dimension (0) instead of the vocab dimension,
# then argmax over the vocab dimension (1).
print("Decoded sentence:", tok.decode(dec.softmax(0).argmax(1)))
With this you get the following output:
Initial sentence: tensor([ 101, 146, 1355, 1106, 1103, 2984, 1103, 1168, 1285, 117,
1122, 1108, 1304, 10703, 1158, 119, 102])
Decoded sentence: [CLS] I went to the store the other day, it was very rewarding. [SEP]
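What seems to be going on (and why the original snippet printed only commas): the MLM head computes logits roughly as W·e + b, where W is tied to the input embeddings but b is a learned per-vocabulary bias that favours frequent tokens such as ",". Taking the softmax over dimension 0 normalises each vocabulary column across sequence positions, so the additive bias cancels out before the argmax over dimension 1. If that is the mechanism, subtracting the bias and then using the per-token argmax from the question should behave similarly; a minimal sketch, assuming the decoder exposes its bias attribute:

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
enc = bert.get_input_embeddings()(sent)
dec = bert.get_output_embeddings()(enc)

# Remove the per-vocabulary bias, then take the per-token argmax as in the question.
unbiased = dec - bert.get_output_embeddings().bias
print(tok.decode(unbiased.argmax(-1)))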