I am currently using this code, which I got from a discussion on GitHub. Here is the attention-mechanism code:
from keras import backend as K
from keras.layers import (Input, Embedding, LSTM, Dense, Flatten,
                          Activation, RepeatVector, Permute, Lambda, merge)

_input = Input(shape=[max_length], dtype='int32')

# get the embedding layer
embedded = Embedding(
    input_dim=vocab_size,
    output_dim=embedding_size,
    input_length=max_length,
    trainable=False,
    mask_zero=False
)(_input)

activations = LSTM(units, return_sequences=True)(embedded)  # (batch, max_length, units)

# compute importance for each step
attention = Dense(1, activation='tanh')(activations)  # (batch, max_length, 1)
attention = Flatten()(attention)                      # (batch, max_length)
attention = Activation('softmax')(attention)          # weights over timesteps
attention = RepeatVector(units)(attention)            # (batch, units, max_length)
attention = Permute([2, 1])(attention)                # (batch, max_length, units)

# Keras 1.x API; in Keras 2 this would be Multiply()([activations, attention])
sent_representation = merge([activations, attention], mode='mul')
# sum over the time axis -> one vector of size `units` per sample
sent_representation = Lambda(lambda xin: K.sum(xin, axis=-2),
                             output_shape=(units,))(sent_representation)

probabilities = Dense(3, activation='softmax')(sent_representation)
Is this the right way to do it? I was somewhat expecting a TimeDistributed layer to be present, since the attention mechanism is applied at every timestep of the RNN, as in the sketch below. I need someone to confirm that this implementation (the code) is a correct implementation of an attention mechanism. Thank you.
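For reference, the TimeDistributed variant I had in mind would look like the line below (a sketch; `activations` is the LSTM output from the code above, and as far as I understand, a Dense layer applied to a 3D tensor in Keras 2 already acts per timestep, so the two forms should be equivalent):

from keras.layers import TimeDistributed, Dense

# Hypothetical alternative: apply the scoring Dense explicitly per timestep.
# With 3D input, Dense(1) and TimeDistributed(Dense(1)) both produce a
# (batch, max_length, 1) output.
attention = TimeDistributed(Dense(1, activation='tanh'))(activations)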
Answer by Phi*_*emy (16 upvotes):
If you want attention along the time dimension, then this part of your code seems correct to me:
activations = LSTM(units, return_sequences=True)(embedded)
# compute importance for each step
attention = Dense(1, activation='tanh')(activations)
attention = Flatten()(attention)
attention = Activation('softmax')(attention)
attention = RepeatVector(units)(attention)
attention = Permute([2, 1])(attention)
sent_representation = merge([activations, attention], mode='mul')
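To make the shape bookkeeping explicit, here is the same pipeline traced with plain numpy on toy numbers (a sketch; batch size 2, max_length 5 and units 4 are arbitrary values picked for illustration):

import numpy as np

# Trace the shapes of the attention block with toy numbers.
B, T, U = 2, 5, 4
activations = np.random.randn(B, T, U)                 # LSTM output, (B, T, U)
scores = np.random.randn(B, T)                         # Dense(1) + Flatten, (B, T)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over time, (B, T)
repeated = np.repeat(weights[:, None, :], U, axis=1)   # RepeatVector(units), (B, U, T)
permuted = repeated.transpose(0, 2, 1)                 # Permute([2, 1]), (B, T, U)
weighted = activations * permuted                      # merge mode='mul', (B, T, U)
print(weighted.shape)                                  # (2, 5, 4)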
You have computed the attention vector of shape (batch_size, max_length):
attention = Activation('softmax')(attention)
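If you want to check or visualize those weights, one option is a second model that outputs them (a sketch; `attention_softmax` would be the tensor returned by Activation('softmax') above, saved under its own name before the variable is overwritten, and `x_batch` is any batch of inputs):

from keras.models import Model

# Hypothetical inspection model: reuse the graph up to the softmax so the
# attention weights can be read out directly (Keras 1.x Model signature).
attention_model = Model(input=_input, output=attention_softmax)
weights = attention_model.predict(x_batch)  # shape: (batch_size, max_length)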
I have never seen this code before, so I cannot say whether this part is actually correct:
K.sum(xin, axis=-2)
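For what it is worth, here is what that line computes on a toy array (my own check with numpy, using the same shapes as above): summing over axis=-2 collapses the time axis, leaving one vector of size units per sample.

import numpy as np

# K.sum(xin, axis=-2) on a (batch, time, units) tensor sums out the time
# axis, turning the attention-weighted activations into a single
# (batch, units) sentence representation.
xin = np.random.randn(2, 5, 4)   # (batch_size, max_length, units)
out = xin.sum(axis=-2)           # same reduction as the Lambda layer
print(out.shape)                 # (2, 4) == (batch_size, units)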
Further reading (you might have a look):