I am running the following LSTM code on Databricks with a GPU:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, LeakyReLU
from tensorflow.keras.optimizers import Adam

# timesteps, n_features, epochs, and generator are defined earlier in the notebook
model = Sequential()
model.add(LSTM(64, activation=LeakyReLU(alpha=0.05),
               batch_input_shape=(1, timesteps, n_features),
               stateful=False, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(32))
model.add(Dropout(0.2))
model.add(Dense(n_features))
model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.001),
              metrics=['accuracy'])
model.fit(generator, epochs=epochs, verbose=0, shuffle=False)
but I keep getting the following warning:
WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Training is also much slower than it was without the GPU. I am using DBR 9.0 ML (includes Apache Spark 3.1.2, GPU, Scala 2.12). Do I need to install any additional libraries?
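For reference, the TensorFlow documentation lists the conditions an LSTM layer must meet for the fused cuDNN kernel to be used: activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0, unroll=False, use_bias=True, and inputs that are not masked (or are strictly right-padded). If the LeakyReLU cell activation is what disqualifies the first layer here, a cuDNN-eligible sketch of the same model would keep the default activations:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# Sketch only: the default activation='tanh' and recurrent_activation='sigmoid'
# satisfy the cuDNN criteria; a LeakyReLU cell activation does not.
model = Sequential()
model.add(LSTM(64, batch_input_shape=(1, timesteps, n_features),
               stateful=False, return_sequences=True))
model.add(Dropout(0.2))  # Dropout between layers is fine; recurrent_dropout > 0 is not
model.add(LSTM(32))
model.add(Dropout(0.2))
model.add(Dense(n_features))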