MultiRNNCell and static_rnn error: Dimensions must be equal, but are 256 and 129

Rob*_*bin 4 python deep-learning lstm tensorflow recurrent-neural-network

I want to build a 3-layer LSTM network. Here is the code:

# Assumed TF 1.x imports (not shown in the original post):
import tensorflow as tf
from tensorflow.contrib import rnn
rnn_cell = tf.nn.rnn_cell

num_layers=3
time_steps=10
num_units=128
n_input=1
learning_rate=0.001
n_classes=1
...

x=tf.placeholder("float",[None,time_steps,n_input],name="x")
y=tf.placeholder("float",[None,n_classes],name="y")
input=tf.unstack(x,time_steps,1)

lstm_layer=rnn_cell.BasicLSTMCell(num_units,state_is_tuple=True)
network=rnn_cell.MultiRNNCell([lstm_layer for _ in range(num_layers)],state_is_tuple=True)

outputs,_=rnn.static_rnn(network,inputs=input,dtype="float")

With num_layers=1 everything works fine, but with more than one layer I get an error on this line:

outputs,_=rnn.static_rnn(network,inputs=input,dtype="float")

ValueError: Dimensions must be equal, but are 256 and 129 for 'rnn/rnn/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/MatMul_1' (op: 'MatMul') with input shapes: [?,256], [129,512].

Can anyone explain where the values 129 and 512 come from?

Max*_*xim 5

You should not reuse the same cell for the first and the deeper layers, because their inputs, and therefore their kernel matrices, have different shapes. A BasicLSTMCell builds a single kernel of shape [input_depth + num_units, 4 * num_units]: for the first layer that is [1 + 128, 4 * 128] = [129, 512], which is exactly where the 129 and 512 in the error come from. The deeper layers receive the previous layer's 128-unit output, so their concatenated input has shape [?, 128 + 128] = [?, 256], which cannot be multiplied by the reused [129, 512] kernel. Try this:

# Helper function for readability; it can easily be inlined.
def make_cell(lstm_size):
  return tf.nn.rnn_cell.BasicLSTMCell(lstm_size, state_is_tuple=True)

network = tf.nn.rnn_cell.MultiRNNCell([make_cell(num_units) for _ in range(num_layers)],
                                      state_is_tuple=True)
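For reference, here is a minimal self-contained sketch of the fixed graph, assuming the TF 1.x API and the hyper-parameters from the question; the shape comments and the final print are illustrative and not part of the original answer:

import tensorflow as tf

num_layers = 3
time_steps = 10
num_units = 128
n_input = 1
n_classes = 1

x = tf.placeholder("float", [None, time_steps, n_input], name="x")
y = tf.placeholder("float", [None, n_classes], name="y")
inputs = tf.unstack(x, time_steps, 1)  # list of `time_steps` tensors of shape [None, n_input]

def make_cell(lstm_size):
    return tf.nn.rnn_cell.BasicLSTMCell(lstm_size, state_is_tuple=True)

# One fresh cell per layer, so each layer builds its own kernel:
#   layer 0:    [n_input + num_units, 4 * num_units] = [129, 512]
#   layers 1-2: [num_units + num_units, 4 * num_units] = [256, 512]
network = tf.nn.rnn_cell.MultiRNNCell(
    [make_cell(num_units) for _ in range(num_layers)], state_is_tuple=True)

outputs, _ = tf.contrib.rnn.static_rnn(network, inputs=inputs, dtype="float")
print(outputs[-1].shape)  # (?, 128): top-layer output at the last time step

Because every layer now constructs its own BasicLSTMCell instance, each one gets a kernel sized for its actual input, and the MatMul shape mismatch disappears.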