How do I use an AttentionMechanism with MultiRNNCell and dynamic_decode?

Ryl*_*fer 2 tensorflow recurrent-neural-network sequence-to-sequence

I want to create a multi-layer dynamic RNN decoder that uses an attention mechanism. To do this, I first create the attention mechanism:

from tensorflow.contrib.seq2seq import BahdanauAttention

attention_mechanism = BahdanauAttention(num_units=ATTENTION_UNITS,
                                        memory=encoder_outputs,
                                        normalize=True)

I then wrap an LSTM cell with the attention mechanism using AttentionWrapper:

from tensorflow.contrib.seq2seq import AttentionWrapper

attention_wrapper = AttentionWrapper(cell=self._create_lstm_cell(DECODER_SIZE),
                                     attention_mechanism=attention_mechanism,
                                     output_attention=False,
                                     alignment_history=True,
                                     attention_layer_size=ATTENTION_LAYER_SIZE)

where self._create_lstm_cell is defined as follows:

@staticmethod
def _create_lstm_cell(cell_size):
    # BasicLSTMCell comes from tf.contrib.rnn
    return BasicLSTMCell(cell_size)

Then I do some bookkeeping (e.g. create my MultiRNNCell, the initial state, a TrainingHelper, etc.):

from tensorflow.contrib.rnn import MultiRNNCell, ResidualWrapper
from tensorflow.contrib.seq2seq import TrainingHelper, BasicDecoder, dynamic_decode

attention_zero = attention_wrapper.zero_state(batch_size=tf.flags.FLAGS.batch_size,
                                              dtype=tf.float32)

# define initial state
initial_state = attention_zero.clone(cell_state=encoder_final_states[0])

training_helper = TrainingHelper(inputs=self.y,  # feed in ground truth
                                 sequence_length=self.y_lengths)  # feed in sequence lengths

layered_cell = MultiRNNCell(
    [attention_wrapper] + [ResidualWrapper(self._create_lstm_cell(cell_size=DECODER_SIZE))
                           for _ in range(NUMBER_OF_DECODER_LAYERS - 1)])

decoder = BasicDecoder(cell=layered_cell,
                       helper=training_helper,
                       initial_state=initial_state)

decoder_outputs, decoder_final_state, decoder_final_sequence_lengths = dynamic_decode(
    decoder=decoder,
    maximum_iterations=tf.flags.FLAGS.max_number_of_scans // 12,
    impute_finished=True)

However, I get the following error: AttributeError: 'LSTMStateTuple' object has no attribute 'attention'.
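
For what it's worth, the error is consistent with a state-structure mismatch: MultiRNNCell expects one state per layer, but initial_state here is a single AttentionWrapperState, so its fields get unpacked across the layers and the AttentionWrapper layer ends up receiving a bare LSTMStateTuple. A minimal sketch of a per-layer initial state, reusing the variable names above (this is an assumed fix, not code from the original question):

# Sketch (assumed fix): give the MultiRNNCell one state per layer, seeding only
# the AttentionWrapper (layer 0) with the encoder's final state.
zero_states = layered_cell.zero_state(batch_size=tf.flags.FLAGS.batch_size,
                                      dtype=tf.float32)
initial_state = (zero_states[0].clone(cell_state=encoder_final_states[0]),) \
                + zero_states[1:]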

What is the correct way to add an attention mechanism to a MultiRNNCell-based dynamic decoder?

Lem*_*mon 6

Have you tried using the attention cell wrapper that tf.contrib provides?

Here is an example that uses the attention wrapper together with dropout:

import tensorflow as tf

cells = []
for i in range(n_layers):
    cell = tf.contrib.rnn.LSTMCell(n_hidden, state_is_tuple=True)

    # attend over a fixed window of the previous 40 cell outputs
    cell = tf.contrib.rnn.AttentionCellWrapper(
        cell, attn_length=40, state_is_tuple=True)

    cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.5)
    cells.append(cell)

cell = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True)
init_state = cell.zero_state(batch_size, tf.float32)
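
A possible way to run the stacked cell, as a sketch; inputs is an assumed [batch_size, max_time, depth] tensor and is not part of the original answer:

# Sketch: unroll the stacked attention cells over a batch of sequences.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,
                                         initial_state=init_state,
                                         dtype=tf.float32)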