Posts by KoK*_*oKo

ValueError: A Concatenate layer should be called on a list of at least 2 inputs

I am trying to concatenate the outputs of two models that use different embedding matrices and feed the result to a sigmoid, but I keep getting an error on the concatenation line. I have tried the suggestions from similar questions, but it keeps raising the same error. I feel I am missing something, but I can't find it. Please help me understand what is going on. Thanks.

############################            MODEL 1      ######################################
from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                     Flatten, Dropout, concatenate)
from tensorflow.keras.models import Model, Sequential

input_tensor = Input(shape=(35,))
input_layer = Embedding(vocab_size, 300, input_length=35,
                        weights=[embedding_matrix], trainable=True)(input_tensor)
conv_blocks = []
filter_sizes = (2, 3, 4)
for fx in filter_sizes:
    # one Conv1D block per filter size (filters=100, kernel_size=fx)
    conv_layer = Conv1D(100, kernel_size=fx, activation='relu',
                        data_format='channels_first')(input_layer)
    maxpool_layer = MaxPooling1D(pool_size=4)(conv_layer)
    flat_layer = Flatten()(maxpool_layer)
    conv_blocks.append(flat_layer)
conc_layer = concatenate(conv_blocks, axis=1)  # a list of three tensors: valid here
graph = Model(inputs=input_tensor, outputs=conc_layer)
model = Sequential()
model.add(graph)
model.add(Dropout(0.2))

############################            MODEL 2      ######################################
input_tensor_1 = Input(shape=(35,))
input_layer_1 = Embedding(vocab_size, 300, input_length=35,
                          weights=[embedding_matrix_1], trainable=True)(input_tensor_1)
conv_blocks_1 = []
filter_sizes_1 = (2, 3, 4)
for fx in filter_sizes_1:
    conv_layer_1 = Conv1D(100, kernel_size=fx, activation='relu',
                          data_format='channels_first')(input_layer_1)
    maxpool_layer_1 = MaxPooling1D(pool_size=4)(conv_layer_1)
    flat_layer_1 = Flatten()(maxpool_layer_1)
    conv_blocks_1.append(flat_layer_1) …
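
In Keras this error is raised when Concatenate (or the concatenate() helper) receives a single tensor rather than a Python list of tensors, which typically happens on the line that merges the two models (not shown above). A minimal runnable sketch of the intended two-branch merge, with a hypothetical vocab_size and random stand-ins for the two embedding matrices:

import numpy as np
from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                     Flatten, Dropout, Dense, concatenate)
from tensorflow.keras.models import Model

vocab_size = 1000                                     # hypothetical
embedding_matrix = np.random.rand(vocab_size, 300)    # stand-in weights
embedding_matrix_1 = np.random.rand(vocab_size, 300)  # stand-in weights

def branch(weights):
    # one Embedding followed by three parallel Conv1D blocks, as in the question
    inp = Input(shape=(35,))
    x = Embedding(vocab_size, 300, weights=[weights], trainable=True)(inp)
    blocks = []
    for fx in (2, 3, 4):
        c = Conv1D(100, kernel_size=fx, activation='relu')(x)  # default channels_last
        p = MaxPooling1D(pool_size=4)(c)
        blocks.append(Flatten()(p))
    return inp, concatenate(blocks, axis=1)  # a list of three tensors: valid

in_a, out_a = branch(embedding_matrix)
in_b, out_b = branch(embedding_matrix_1)

# concatenate() must be given a *list* of at least two tensors;
# concatenate(out_a) or concatenate([out_a]) raises the ValueError in the title.
merged = concatenate([out_a, out_b], axis=1)
predictions = Dense(1, activation='sigmoid')(Dropout(0.2)(merged))
model = Model(inputs=[in_a, in_b], outputs=predictions)
model.summary()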

python python-3.x conv-neural-network keras tf.keras

5 votes · 1 answer · 9887 views

RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; ... 9.06 GiB reserved in total by PyTorch)

What does 9.06 GiB reserved in total by PyTorch mean?

If I run the same script on a smaller GPU with 7.80 GiB total capacity, it reports 6.20 GiB reserved in total by PyTorch. How does PyTorch's memory reservation work, and why does the reserved amount change with the size of the GPU?

To get past the error message RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; 9.02 GiB already allocated; 1.29 GiB free; 9.06 GiB reserved in total by PyTorch), I tried reducing the batch size from 10 to 5 to 3. I tried del x_train1 and torch.cuda.empty_cache(). I also wrapped the pre-trained model call (x_train1 = bert_model(train_indices)[2]) in with torch.no_grad(), and used it while training and validating the new model. None of this worked.

Here is the traceback: …
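
For context on the allocated/reserved distinction: PyTorch's caching allocator requests memory from the CUDA driver in large blocks and keeps freed blocks around for reuse, so "reserved" is "allocated" plus this cache. It tracks what the workload has needed so far, which is why it settles at different values on different GPUs. A minimal sketch using PyTorch's public allocator counters (hypothetical tensor size):

import torch

# ~1 GiB of float32: 1024 * 1024 * 256 elements * 4 bytes (hypothetical size)
x = torch.empty(1024, 1024, 256, device='cuda')
print(f"allocated: {torch.cuda.memory_allocated() / 2**30:.2f} GiB")  # live tensors
print(f"reserved:  {torch.cuda.memory_reserved() / 2**30:.2f} GiB")   # tensors + cache

del x  # memory moves from "allocated" back into the cache, staying "reserved"
torch.cuda.empty_cache()  # hands cached blocks back to the CUDA driver
print(f"reserved after empty_cache: {torch.cuda.memory_reserved() / 2**30:.2f} GiB")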

gpu nvidia pytorch

5 votes · 0 answers · 2779 views

Decoding sentence representations derived from SentenceTransformer

Is it possible to decode a sentence representation derived from SentenceTransformer back into the original sentence?

See this example from the documentation:

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# Our sentences we would like to encode
sentences = ['This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of string.',
    'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

I would like to decode the representation to recover the sentence:

embedding = [[-1.76214352e-01  1.20600984e-01 -2.93624014e-01 -2.29858071e-01
  -8.22928399e-02  2.37709314e-01  ... 3.39985073e-0]]
sentence = model.decode(embedding)  # desired (hypothetical) method
print(sentence)

Expected output:

'This framework generates embeddings for each input sentence'
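
For what it's worth, SentenceTransformer exposes encode() but no decode(): the embedding is a lossy, fixed-size vector, so the library offers no exact inverse. A common workaround, sketched below under that assumption, is a nearest-neighbour lookup that maps an embedding back to the closest sentence in a pool you already have:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('paraphrase-MiniLM-L6-v2')
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of string.',
             'The quick brown fox jumps over the lazy dog.']
embeddings = model.encode(sentences, convert_to_tensor=True)

query = embeddings[0]                     # stands in for a stored embedding
scores = util.cos_sim(query, embeddings)  # 1 x len(sentences) cosine similarities
best = scores.argmax().item()
print(sentences[best])  # 'This framework generates embeddings for each input sentence'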

python text bert-language-model sentence-transformers

5 votes · 0 answers · 1884 views