
Stateful LSTM TensorFlow: Invalid input_h shape error

I am experimenting with a stateful LSTM on a time-series regression problem using TensorFlow. I apologize that I cannot share the dataset. Below is my code.

import tensorflow as tf

# Reshape the 2-D feature arrays (samples, features) into the LSTM's
# expected 3-D input (samples, timesteps=1, features).
train_feature = train_feature.reshape((train_feature.shape[0], 1, train_feature.shape[1]))
val_feature = val_feature.reshape((val_feature.shape[0], 1, val_feature.shape[1]))

batch_size = 64

# Stateful LSTM: batch_input_shape fixes the batch size at 64.
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(50, batch_input_shape=(batch_size, train_feature.shape[1], train_feature.shape[2]), stateful=True))
model.add(tf.keras.layers.Dense(1))

model.compile(optimizer='adam',
              loss='mse',
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

model.fit(train_feature, train_label, 
          epochs=10,
          batch_size=batch_size)

When I run the code above, I get the following error after the first epoch finishes.

InvalidArgumentError:  [_Derived_]  Invalid input_h shape: [1,64,50] [1,49,50]
     [[{{node CudnnRNN}}]]
     [[sequential_1/lstm_1/StatefulPartitionedCall]] [Op:__inference_train_function_1152847]

Function call stack:
train_function -> train_function -> train_function
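
If I read the shapes correctly, cuDNN's recurrent state input_h has shape [num_layers, batch, units], so the model expects [1, 64, 50], but the epoch's final batch apparently holds only 49 samples ([1, 49, 50]). That suggests my number of training rows is not a multiple of 64, and a stateful LSTM with a fixed batch_input_shape needs every batch to contain exactly batch_size samples. Below is a minimal sketch of trimming away the trailing partial batch (synthetic stand-in arrays, since I cannot share the real data):

import numpy as np

batch_size = 64

# Stand-in arrays (the real dataset cannot be shared): 4145 % 64 == 49,
# which reproduces the [1, 49, 50] state shape in the error above.
train_feature = np.random.rand(4145, 1, 10).astype("float32")
train_label = np.random.rand(4145, 1).astype("float32")

# Drop the trailing partial batch so every batch holds exactly batch_size
# samples; a stateful LSTM with a fixed batch_input_shape cannot accept a
# smaller final batch.
n_keep = (train_feature.shape[0] // batch_size) * batch_size
train_feature = train_feature[:n_keep]
train_label = train_label[:n_keep]

print(train_feature.shape[0] % batch_size)  # 0 -> no partial batch left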

However, if I change batch_size to 1 and change the training code to the following, the model trains successfully.

total_epochs = 10

for i in range(total_epochs):
    # With batch_size = 1, train one epoch per fit() call so the LSTM's
    # carried state can be cleared between epochs.
    model.fit(train_feature, train_label, 
              epochs=1,
              validation_data=(val_feature, val_label),
              batch_size=batch_size,
              shuffle=False)

    model.reset_states()

Even so, with very large data (1 million rows) and a batch_size of 1, training takes a very long time.

So, I am wondering: how can I train the model with a batch size greater than …
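
For completeness, here is a minimal end-to-end sketch of the kind of setup I am hoping to make work, assuming that trimming the arrays to a multiple of batch_size (as sketched above) is acceptable; the data is a synthetic stand-in and the layer sizes mirror the code above:

import numpy as np
import tensorflow as tf

batch_size = 64

# Synthetic stand-in data, already a multiple of batch_size (4096 and 1024).
train_feature = np.random.rand(4096, 1, 10).astype("float32")
train_label = np.random.rand(4096, 1).astype("float32")
val_feature = np.random.rand(1024, 1, 10).astype("float32")
val_label = np.random.rand(1024, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, batch_input_shape=(batch_size, 1, 10), stateful=True),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

for _ in range(10):
    # One epoch per fit() call so the state can be cleared between epochs.
    model.fit(train_feature, train_label,
              epochs=1,
              validation_data=(val_feature, val_label),
              batch_size=batch_size,
              shuffle=False)  # keep sample order: state carries across batches
    model.reset_states()      # reset the carried LSTM state before the next epoch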

Tags: neural-network, lstm, keras, tensorflow, lstm-stateful

