keras bidirectional lstm seq2seq

JJ *_* D. 7 python lstm keras

I'm trying to modify Keras's lstm_seq2seq.py example to turn it into a bidirectional LSTM model.

https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py

I've tried a couple of approaches:

  • The first was to apply the Bidirectional wrapper directly to the LSTM layer:

    encoder_inputs = Input(shape=(None, num_encoder_tokens))
    encoder = Bidirectional(LSTM(latent_dim, return_state=True))
    

But I got this error message:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-76-a80f8554ab09> in <module>()
     75 encoder = Bidirectional(LSTM(latent_dim, return_state=True))
     76 
---> 77 encoder_outputs, state_h, state_c = encoder(encoder_inputs)
     78 # We discard `encoder_outputs` and only keep the states.
     79 encoder_states = [state_h, state_c]

/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/engine/topology.py in __call__(self, inputs, **kwargs)
    601 
    602             # Actually call the layer, collecting output(s), mask(s), and shape(s).
--> 603             output = self.call(inputs, **kwargs)
    604             output_mask = self.compute_mask(inputs, previous_mask)
    605 

/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/layers/wrappers.py in call(self, inputs, training, mask)
    293             y_rev = K.reverse(y_rev, 1)
    294         if self.merge_mode == 'concat':
--> 295             output = K.concatenate([y, y_rev])
    296         elif self.merge_mode == 'sum':
    297             output = y + y_rev

/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in concatenate(tensors, axis)
   1757     """
   1758     if axis < 0:
-> 1759         rank = ndim(tensors[0])
   1760         if rank:
   1761             axis %= rank

/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in ndim(x)
    597     ```
    598     """
--> 599     dims = x.get_shape()._dims
    600     if dims is not None:
    601         return len(dims)

AttributeError: 'list' object has no attribute 'get_shape'
  • My second guess was to change the inputs to match those in https://github.com/keras-team/keras/blob/master/examples/imdb_bidirectional_lstm.py:

    encoder_input_data = np.empty(len(input_texts), dtype=object)
    decoder_input_data = np.empty(len(input_texts), dtype=object)
    decoder_target_data = np.empty(len(input_texts), dtype=object)
    
    for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
        encoder_input_data[i] = [input_token_index[char] for char in input_text]
        tseq = [target_token_index[char] for char in target_text]
        decoder_input_data[i] = tseq
    decoder_target_data[i] = tseq[1:]  # target is the decoder input shifted by one step
    
    encoder_input_data = sequence.pad_sequences(encoder_input_data, maxlen=max_encoder_seq_length)
    decoder_input_data = sequence.pad_sequences(decoder_input_data, maxlen=max_decoder_seq_length)
    decoder_target_data = sequence.pad_sequences(decoder_target_data, maxlen=max_decoder_seq_length)
    
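As an aside, `sequence.pad_sequences` pre-pads (and pre-truncates) by default, turning the ragged token lists into one rectangular array. A minimal NumPy sketch of that behaviour (`pad_sequences_sketch` is a hypothetical stand-in for illustration, not the Keras implementation):

```python
import numpy as np

def pad_sequences_sketch(seqs, maxlen, value=0):
    """Left-pad (and truncate from the front) each sequence to maxlen,
    mimicking the default behaviour of keras.preprocessing.sequence.pad_sequences."""
    out = np.full((len(seqs), maxlen), value, dtype=int)
    for i, seq in enumerate(seqs):
        trunc = seq[-maxlen:]                  # keep at most the last maxlen tokens
        out[i, maxlen - len(trunc):] = trunc   # right-align, so padding goes on the left
    return out

padded = pad_sequences_sketch([[1, 2], [3, 4, 5, 6, 7]], maxlen=4)
print(padded)
# [[0 0 1 2]
#  [4 5 6 7]]
```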

But I got the same error message:

AttributeError: 'list' object has no attribute 'get_shape'

Any help? Thanks.

(Code: https://gist.github.com/anonymous/c0fd6541ab4fc9c2c1e0b86175fb65c7)

Yu-*_*ang 12

The error you're seeing is because the Bidirectional wrapper doesn't handle the state tensors properly. I fixed it in this PR, and the fix is included in the latest 2.1.3 release. So if you upgrade Keras to the latest version, the lines in your question should now work as-is.

Note that the value returned by Bidirectional(LSTM(..., return_state=True)) is a list containing:

  1. the layer output
  2. the states (h, c) of the forward layer
  3. the states (h, c) of the backward layer

So you'd need to merge the state tensors before passing them to the decoder (which, I suppose, is usually kept unidirectional). For example, if you choose to concatenate the states:

from keras.layers import Input, LSTM, Bidirectional, Concatenate

encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = Bidirectional(LSTM(latent_dim, return_state=True))
encoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder(encoder_inputs)

state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])
encoder_states = [state_h, state_c]

decoder_inputs = Input(shape=(None, num_decoder_tokens))
# The decoder's width must match the concatenated states: latent_dim * 2
decoder_lstm = LSTM(latent_dim * 2, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
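To see why the decoder needs `latent_dim * 2` units: each direction of the bidirectional encoder contributes its own `(h, c)` pair of width `latent_dim`, and concatenating forward and backward states doubles that width. A NumPy sketch of just the shapes involved (toy arrays standing in for the real state tensors):

```python
import numpy as np

batch_size, latent_dim = 4, 256

# Stand-ins for the four state tensors returned by the bidirectional encoder.
forward_h = np.zeros((batch_size, latent_dim))
forward_c = np.zeros((batch_size, latent_dim))
backward_h = np.zeros((batch_size, latent_dim))
backward_c = np.zeros((batch_size, latent_dim))

# Concatenate along the feature axis, as Concatenate() does by default (axis=-1).
state_h = np.concatenate([forward_h, backward_h], axis=-1)
state_c = np.concatenate([forward_c, backward_c], axis=-1)

print(state_h.shape)  # (4, 512) -> so the decoder LSTM needs latent_dim * 2 units
```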


MrJ*_*s0n 0

If the problem lies in the data-preparation step, it is conceptually similar to this one, where a plain Python list lacks the shape attribute that NumPy arrays normally carry.
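That mismatch is easy to reproduce outside Keras: a plain Python list carries no shape information, while a NumPy array (like a backend tensor) does. A quick illustrative check:

```python
import numpy as np

plain = [1, 2, 3]        # a plain Python list
arr = np.array(plain)    # the same data as a NumPy array

# Plain lists have no shape attribute; arrays (and tensors) do.
print(hasattr(plain, 'shape'))  # False
print(hasattr(arr, 'shape'))    # True
print(arr.shape)                # (3,)
```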

此外,您应该将输入提供给 LSTM 编码器,或者简单地将 input_shape 值设置为 LSTM 层。当输入双向层时,请始终在 LSTM 层中使用 return_sequences=True。检查此线程以了解如何正确使用它们,然后查看我在GitHub上为 NLP 项目编写的一些代码行(我也使用了双向层)