I am trying to implement a regression Sequential model in Keras, but I am getting very strange results. My code is below. I am feeding the model with tf.data datasets (a simplified sketch of how they are built is at the bottom of this post). The loss decreases at first, but then starts to fluctuate. Is there a reason why this happens?
# imports (TensorFlow 2.x / tf.keras)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, LeakyReLU, Dropout
from tensorflow.keras.callbacks import TensorBoard

# initialize the model
model = keras.models.Sequential()
# input layer sized to the number of input features
model.add(Dense(1 + (len(feature_names) - 1), input_shape=((len(feature_names) - 1),)))
model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(LeakyReLU())
# output layer, 1 unit
model.add(Dense(1))

loss = 'mean_squared_error'
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.0001)

# compile the model
m = tf.keras.metrics.RootMeanSquaredError()
model.compile(optimizer=optimizer, loss=loss, metrics=[m, 'mae'])

# early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=4)
tb = TensorBoard(histogram_freq=1)

## try learning rates, different noise, dropout, using only nasadem
# run the model
history = model.fit(x=final_train_dataset, epochs=count, steps_per_epoch=steps_per_epoch,
                    validation_data=final_valid_dataset, callbacks=[tb])
Sample training output:

5468/5468 [==============================] - 19s 4ms/step - loss: 19.8461 - root_mean_squared_error: 4.4549 - mae: 3.1814 - val_loss: 13.2963 - val_root_mean_squared_error: 3.6464 - …
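For context, the tf.data input pipeline is built roughly as follows. This is a simplified, self-contained sketch: the synthetic arrays, the BATCH_SIZE value, and the make_dataset helper are placeholders standing in for my real preprocessing, not the exact code.

import numpy as np
import tensorflow as tf

BATCH_SIZE = 32  # placeholder batch size

# placeholder arrays standing in for the real feature/label data
rng = np.random.default_rng(0)
n_features = 10
train_features = rng.normal(size=(50_000, n_features)).astype("float32")
train_labels = rng.normal(size=(50_000,)).astype("float32")
valid_features = rng.normal(size=(10_000, n_features)).astype("float32")
valid_labels = rng.normal(size=(10_000,)).astype("float32")

def make_dataset(features, labels, training=True):
    # build a tf.data pipeline from in-memory arrays
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    if training:
        # reshuffle every epoch so batches are not drawn in a fixed order
        ds = ds.shuffle(buffer_size=len(features), reshuffle_each_iteration=True)
        # repeat() so steps_per_epoch never exhausts the training dataset
        ds = ds.repeat()
    # batch and prefetch for throughput
    return ds.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)

final_train_dataset = make_dataset(train_features, train_labels, training=True)
final_valid_dataset = make_dataset(valid_features, valid_labels, training=False)

# one epoch covers the training data roughly once
steps_per_epoch = len(train_features) // BATCH_SIZE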