I ran into a problem when trying to use CuDNNLSTM instead of keras.layers.LSTM.
This is the error I get:
Fail to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, seq_length, batch_size]: [1, 300, 512, 1, 5528, 1] [[{{node bidirectional_1/CudnnRNN_1}} = CudnnRNN[T=DT_FLOAT, _class=["loc:@train...NNBackprop"], direction="unidirectional", dropout=0, input_mode="linear_input", is_training=true, rnn_mode="lstm", seed=87654321, seed2=0, _device="/job:localhost/replica:0/task:0/device:GPU:0"](bidirectional_1/transpose_1, bidirectional_1/ExpandDims_1, bidirectional_1/ExpandDims_1, bidirectional_1/concat_1)]] [[{{node loss/mul/_75}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1209_loss/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
Also, on one of the runs I got this error:
InternalError: GPU sync failed
And the kernel dies after every run.
I only started getting this error when I tried to run it with CuDNNLSTM on a VM instance on Google Cloud.
My code is:
MAX_LEN = max(len(article) for article in X_train_tokens)  # length of the longest tokenized article
EMBEDDING_DIM = 300                                         # word-vector dimensionality
vocab_size = len(word_to_id)                                # number of distinct tokens
classes = …
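
For reference, the CuDNNLSTM part of the model is built roughly like the sketch below. The sizes come from the trace above (embedding dim 300 feeding a Bidirectional CuDNNLSTM with 512 units); the surrounding layers are a simplified stand-in rather than the exact script, and it reuses the variables defined above:

from keras.models import Sequential
from keras.layers import Embedding, Dense, Bidirectional, CuDNNLSTM

# Simplified stand-in for the real model. The 300-dim embedding and the
# 512-unit Bidirectional CuDNNLSTM match the input_size / num_units
# reported in the ThenRnnForward error above.
model = Sequential()
model.add(Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN))
model.add(Bidirectional(CuDNNLSTM(512)))               # cuDNN-fused kernel, GPU only
model.add(Dense(len(classes), activation='softmax'))   # 'classes' = label list from above
model.compile(loss='categorical_crossentropy', optimizer='adam')

Swapping CuDNNLSTM back to keras.layers.LSTM in this setup runs without the error, only much more slowly.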