Tags: neural-network, lstm, keras, rnn, seq2seq
I am trying to implement a seq2seq encoder-decoder in Keras, using a bidirectional LSTM on the encoder, as follows:
from keras.layers import LSTM,Bidirectional,Input,Concatenate
from keras.models import Model
n_units = 8
n_input = 1
n_output = 1
# encoder
encoder_inputs = Input(shape=(None, n_input))
encoder = Bidirectional(LSTM(n_units, return_state=True))
encoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder(encoder_inputs)
state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])
encoder_states = [state_h, state_c]
# decoder
decoder_inputs = Input(shape=(None, n_output))
decoder_lstm = LSTM(n_units*2, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
This is the error I get on the last line:
ValueError: Dimensions must be equal, but are 8 and 16 for
'lstm_2_1/MatMul_4' (op: 'MatMul') with input shapes: [?,8], [16,16].
Any ideas?
Although the error points to the last line of the block in the question, it was actually caused by the wrong number of hidden units in the inference decoder (code not shown in the question). Solved!
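The key point: because the encoder is bidirectional and its forward and backward states are concatenated, every state handed to the decoder has dimension n_units*2, so the inference decoder's state inputs must be sized to match. A minimal sketch of the mistake and the fix (variable names taken from the working code below):

# Wrong: state inputs sized for a single LSTM direction (n_units),
# which triggers the dimension-mismatch error from the question.
# decoder_state_input_h = Input(shape=(n_units,))
# decoder_state_input_c = Input(shape=(n_units,))

# Right: state inputs sized for the concatenated forward+backward states.
decoder_state_input_h = Input(shape=(n_units * 2,))
decoder_state_input_c = Input(shape=(n_units * 2,))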
Full working code:
from keras.layers import LSTM, Bidirectional, Input, Concatenate, Dense
from keras.models import Model
n_units = 8
n_input = 1
n_output = 1
# encoder
encoder_inputs = Input(shape=(None, n_input))
encoder = Bidirectional(LSTM(n_units, return_state=True))
encoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder(encoder_inputs)
state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])
encoder_states = [state_h, state_c]
# decoder
decoder_inputs = Input(shape=(None, n_output))
decoder_lstm = LSTM(n_units*2, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(n_output, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# define inference encoder
encoder_model = Model(encoder_inputs, encoder_states)
# define inference decoder
decoder_state_input_h = Input(shape=(n_units*2,))
decoder_state_input_c = Input(shape=(n_units*2,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
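To actually generate a sequence with the two inference models, run the encoder once and then feed the decoder's own prediction back in step by step. A minimal greedy-decoding sketch, assuming an all-zeros start vector and a fixed number of steps (both are assumptions here; adapt them to your own start/stop convention):

import numpy as np

def decode_sequence(input_seq, n_steps=10):
    # Run the encoder once to get the concatenated forward/backward states.
    state_h, state_c = encoder_model.predict(input_seq)
    states = [state_h, state_c]
    # Assumption: start decoding from an all-zeros input vector.
    target_seq = np.zeros((1, 1, n_output))
    decoded = []
    for _ in range(n_steps):
        # Predict one step and collect the updated decoder states.
        output, state_h, state_c = decoder_model.predict([target_seq] + states)
        decoded.append(output[0, -1, :])
        # Feed the prediction back in as the next decoder input.
        target_seq = output
        states = [state_h, state_c]
    return np.array(decoded)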