Tags: python, machine-learning, loss, neural-network, keras
I have a small neural network in Keras:
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint

contextTrain, contextTest, utteranceTrain, utteranceTest = train_test_split(context, utterance, test_size=0.1, random_state=1)
model = Sequential()
model.add(LSTM(input_shape=contextTrain.shape[1:], return_sequences=True, units=300, activation="sigmoid", kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal"))
model.add(LSTM(return_sequences=True, units=300, activation="sigmoid", kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal"))
model.compile(loss="cosine_proximity", optimizer="adam", metrics=["accuracy"])
model.fit(contextTrain, utteranceTrain, epochs=5000, validation_data=(contextTest, utteranceTest), callbacks=[ModelCheckpoint("model{epoch:02d}.h5", monitor='val_acc', save_best_only=True, mode='max')])
context and utterance are numpy arrays with a shape such as (100, 15, 300). The input_shape of the first LSTM should therefore be (15, 300).
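To illustrate the shapes, here is a minimal sketch with random dummy data standing in for my real arrays (the values are made up; only the shapes match):

```python
import numpy as np

# Dummy stand-ins for the real context/utterance arrays:
# 100 samples, sequence length 15, embedding size 300.
context = np.random.rand(100, 15, 300)
utterance = np.random.rand(100, 15, 300)

# The per-sample shape fed to the first LSTM is everything
# after the batch dimension:
print(context.shape[1:])  # (15, 300)
```

With test_size=0.1, train_test_split then leaves 90 samples for training and 10 for validation, which matches the log below.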
I don't know what happened, but the loss and val_loss suddenly became negative during training. They used to be positive (around 0.18 or so).
Train on 90 samples, validate on 10 samples
Epoch 1/5000
90/90 [==============================] - 5s 52ms/step - loss: -0.4729 - acc: 0.0059 - val_loss: -0.4405 - val_acc: 0.0133
Epoch 2/5000
90/90 [==============================] - 2s 18ms/step - loss: -0.5091 - acc: 0.0089 - val_loss: -0.4658 - val_acc: 0.0133
Epoch 3/5000
90/90 [==============================] - 2s 18ms/step - loss: -0.5204 - acc: 0.0170 - val_loss: -0.4829 - val_acc: 0.0200
Epoch 4/5000
90/90 [==============================] - 2s 20ms/step - loss: -0.5296 - acc: 0.0244 - val_loss: -0.4949 - val_acc: 0.0333
Epoch 5/5000
90/90 [==============================] - 2s 20ms/step - loss: -0.5370 - acc: 0.0422 - val_loss: -0.5021 - val_acc: 0.0400
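As far as I can tell from the Keras source, cosine_proximity is defined as the negative mean cosine similarity, so I tried reproducing it with a small numpy sketch (my own re-implementation, not the Keras code itself):

```python
import numpy as np

def cosine_proximity(y_true, y_pred):
    # Negative mean cosine similarity: L2-normalise both tensors
    # along the last axis, take their dot product per sample,
    # then negate the mean over the batch.
    y_true = y_true / np.linalg.norm(y_true, axis=-1, keepdims=True)
    y_pred = y_pred / np.linalg.norm(y_pred, axis=-1, keepdims=True)
    return -np.mean(np.sum(y_true * y_pred, axis=-1))

y = np.random.rand(4, 300)
print(cosine_proximity(y, y))  # -1.0 when prediction equals target
```

Under this definition the loss is bounded in [-1, 1] and a perfect prediction gives -1.0, so the values in my log would be getting *more* negative as training progresses.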
What does this mean? What could be the cause?
Thanks for any reply.