Very low accuracy with a Keras neural network, and validation accuracy of 0.0000e+00

R.Nair  3  neural-network python-3.x keras

Below is the code I am using. Can you tell me why my training and validation accuracy are so low? The validation accuracy is exactly 0.0000e+00 and the training accuracy is only about 37%. What could be wrong? My training set has 10500 rows and 172 columns, and my test set has 3150 rows and 172 columns. The first column is the response (class), so I use it alone as Y and the remaining columns as X. My response has 3 classes: default, low frequency and radio frequency.

from __future__ import print_function
import numpy as np
import pandas
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
np.random.seed(1671)
NB_EPOCH = 5
BATCH_SIZE = 128
VERBOSE = 1
NB_CLASSES = 3
OPTIMIZER = SGD()
N_HIDDEN = 128
VALIDATION_SPLIT=0.1
RESHAPED = 171
dataframe_train = pandas.read_csv("TrainingEdgesToAction.csv", header=None)
dataset_train = dataframe_train.values
X_train = dataset_train[1:,1:172].astype(float)
#X_train = dataset_train[1:,0:172]
Y_train = dataset_train[1:,0]

dataframe_test = pandas.read_csv("TestingEdgesToAction.csv", header=None)
dataset_test = dataframe_test.values
X_test = dataset_test[1:,1:172].astype(float)
#X_test = dataset_test[1:,0:172]
Y_test = dataset_test[1:,0]

X_train = X_train.reshape(10500,RESHAPED)
X_test = X_test.reshape(3150,RESHAPED)
X_train /= 255
X_test /= 255
print(X_train.shape[0],'train samples')
print(X_test.shape[0],'test samples')

encoder = LabelEncoder()
encoder.fit(Y_train)
encoded_Y_train = encoder.transform(Y_train)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y_train = np_utils.to_categorical(encoded_Y_train)
print(dummy_y_train)

encoder = LabelEncoder()
encoder.fit(Y_test)
encoded_Y_test = encoder.transform(Y_test)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y_test = np_utils.to_categorical(encoded_Y_test)
print(dummy_y_test)

#Y_train = np_utils.to_categorical(Y_train,NB_CLASSES)
#Y_test = np_utils.to_categorical(Y_test, NB_CLASSES)

model = Sequential()
model.add(Dense(N_HIDDEN,input_shape=(RESHAPED,)))
model.add(Activation('relu'))
model.add(Dense(N_HIDDEN))
model.add(Activation('relu'))
model.add(Dense(NB_CLASSES))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',optimizer=OPTIMIZER,metrics=['accuracy'])
history = model.fit(X_train,dummy_y_train,batch_size=BATCH_SIZE,epochs=NB_EPOCH,shuffle=True,verbose=VERBOSE,validation_split=VALIDATION_SPLIT)
score = model.evaluate(X_test,dummy_y_test,verbose=VERBOSE)

print("\nTest score:",score[0])
print("Test accuracy:",score[1])

10500 train samples
3150 test samples
[[ 1.  0.  0.]
[ 1.  0.  0.]
[ 1.  0.  0.]
..., 
[ 0.  0.  1.]
[ 0.  0.  1.]
[ 0.  0.  1.]]
[[ 1.  0.  0.]
[ 1.  0.  0.]
[ 1.  0.  0.]
..., 
[ 0.  0.  1.]
[ 0.  0.  1.]
[ 0.  0.  1.]]
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_49 (Dense)             (None, 128)               22016     
_________________________________________________________________
activation_49 (Activation)   (None, 128)               0         
_________________________________________________________________
dense_50 (Dense)             (None, 128)               16512     
_________________________________________________________________
activation_50 (Activation)   (None, 128)               0         
_________________________________________________________________
dense_51 (Dense)             (None, 3)                 387       
_________________________________________________________________
activation_51 (Activation)   (None, 3)                 0         
=================================================================
Total params: 38,915
Trainable params: 38,915
Non-trainable params: 0
_________________________________________________________________
Train on 9450 samples, validate on 1050 samples
Epoch 1/5
9450/9450 [==============================] - 2s - loss: 1.0944 - acc: 0.3618 - val_loss: 1.1809 - val_acc: 0.0000e+00
Epoch 2/5
9450/9450 [==============================] - 1s - loss: 1.0895 - acc: 0.3704 - val_loss: 1.2344 - val_acc: 0.0000e+00
Epoch 3/5
9450/9450 [==============================] - 0s - loss: 1.0874 - acc: 0.3704 - val_loss: 1.2706 - val_acc: 0.0000e+00
Epoch 4/5
9450/9450 [==============================] - 0s - loss: 1.0864 - acc: 0.3878 - val_loss: 1.2955 - val_acc: 0.0000e+00
Epoch 5/5
9450/9450 [==============================] - 0s - loss: 1.0860 - acc: 0.3761 - val_loss: 1.3119 - val_acc: 0.0000e+00
2848/3150 [==========================>...] - ETA: 0s
Test score: 1.10844093784
Test accuracy: 0.333333333333

Paddy  7

I decided to summarize our "chat".

So, if your test accuracy is low (around 0.1%), here are some general suggestions:

  • Try different optimizers; in my experience, Adam is a good starting point.
  • Try different activation functions; I suggest starting with 'relu' and then trying 'selu' and 'elu'.
  • Add regularization. Dropout and BatchNormalization can improve your test accuracy.
  • Give your network some time and train it for longer.
  • Play with the hyperparameters: number of layers, batch size, epochs, learning rate, and so on.
  • Finally, always normalize your data before feeding it to the NN (a rough sketch combining these suggestions follows after this list).
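
As a rough illustration of these points, the sketch below swaps SGD for Adam, adds Dropout and BatchNormalization after each hidden layer, standardizes the inputs with scikit-learn's StandardScaler instead of dividing by 255, and trains for more epochs. The CSV file name, column layout and number of classes are taken from the question; the dropout rate, layer sizes, epoch count and the choice of StandardScaler are illustrative starting values, not tuned or verified settings.

import numpy as np
import pandas
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization
from keras.optimizers import Adam
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder, StandardScaler

dataframe_train = pandas.read_csv("TrainingEdgesToAction.csv", header=None)
dataset_train = dataframe_train.values
X_train = dataset_train[1:, 1:172].astype(float)   # features: columns 1..171
Y_train = dataset_train[1:, 0]                      # response: first column

# Per-feature standardization; these inputs are not pixel intensities,
# so dividing by 255 is not meaningful here.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)

# One-hot encode the three class labels, as in the question.
encoder = LabelEncoder()
dummy_y_train = np_utils.to_categorical(encoder.fit_transform(Y_train))

# Shuffle the rows before fitting so the validation split (which Keras
# takes from the end of the arrays) contains a mix of all classes.
perm = np.random.permutation(len(X_train))
X_train, dummy_y_train = X_train[perm], dummy_y_train[perm]

model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(171,)))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(3, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),           # Adam instead of plain SGD
              metrics=['accuracy'])

history = model.fit(X_train, dummy_y_train,
                    batch_size=128, epochs=50,   # more epochs than the original 5
                    validation_split=0.1, verbose=1)

The same StandardScaler (fitted on the training data only) would then be applied to the test features with scaler.transform before calling model.evaluate.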