Post by Em_*_*Em_

BERT sentence embedding by summing the last 4 layers

I followed Chris McCormick's tutorial on BERT, using `pytorch-pretrained-bert`, to get sentence embeddings as follows:

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# marked_text is the input sentence wrapped in BERT's special tokens,
# e.g. "[CLS] <sentence> [SEP]"
tokenized_text = tokenizer.tokenize(marked_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

with torch.no_grad():
    encoded_layers, _ = model(tokens_tensor, segments_tensors)
    # Holds the list of 12 layer embeddings for each token
    # Will have the shape: [# tokens, # layers, # features]
    token_embeddings = []

    # For each token in the sentence...
    for token_i in range(len(tokenized_text)):
        # Holds 12 layers of hidden states for …
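The per-token loop above can be replaced with a few tensor operations. Below is a minimal sketch of the "sum the last 4 layers" step, assuming `encoded_layers` is what `pytorch-pretrained-bert`'s `BertModel` returns: a list of 12 tensors, each of shape `[batch, tokens, hidden]`. Random tensors stand in for the real model output so the manipulation itself is clear.

```python
import torch

# Simulated model output: 12 layers of [batch=1, tokens, hidden] tensors.
num_layers, seq_len, hidden = 12, 7, 768
encoded_layers = [torch.randn(1, seq_len, hidden) for _ in range(num_layers)]

# Stack into [layers, batch, tokens, hidden], then drop the batch dim.
token_embeddings = torch.stack(encoded_layers, dim=0).squeeze(1)

# Rearrange to [tokens, layers, hidden].
token_embeddings = token_embeddings.permute(1, 0, 2)

# Sum the last 4 layers for each token -> [tokens, hidden].
token_vecs = token_embeddings[:, -4:, :].sum(dim=1)

# One common choice of sentence embedding: the mean over tokens -> [hidden].
sentence_embedding = token_vecs.mean(dim=0)
```

Whether to sum, concatenate, or average those layers is a design choice; McCormick's tutorial discusses several variants, and summing the last 4 is just one of them.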

python nlp neural-network pytorch

3 votes · 1 answer · 4559 views

Tag statistics

neural-network ×1

nlp ×1

python ×1

pytorch ×1