TensorFlow: predicting output classes

Jaf*_*son 5 python python-3.x categorical-data keras tensorflow

I tried this example with Keras, but without an LSTM. My model is an LSTM in TensorFlow, and I would like to predict the output as classes, just like the Keras model's predict_classes.
The TensorFlow model I am trying is this:

import tensorflow as tf

seq_len = 10
n_steps = seq_len - 1
n_inputs = x_train.shape[2]
n_neurons = 50
n_outputs = y_train.shape[1]
n_layers = 2
learning_rate = 0.0001
batch_size = 100
n_epochs = 1000
train_set_size = x_train.shape[0]
test_set_size = x_test.shape[0]

tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_outputs])

# Stack of LSTM cells with peephole connections
layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons, activation=tf.nn.sigmoid, use_peepholes=True)
          for layer in range(n_layers)]

multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

# Project the RNN outputs to n_outputs values per time step, then keep only the last step
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
outputs = outputs[:, n_steps-1, :]

loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
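
The "predicted" values shown further down are presumably obtained by evaluating the outputs tensor in a session. A minimal, illustrative sketch (not the author's exact training loop; it assumes x_train, y_train and x_test are prepared as above) could look like this:

# Illustrative sketch only: full-batch training for brevity
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(n_epochs):
        sess.run(training_op, feed_dict={X: x_train, y: y_train})
    # outputs yields one value per class; these are the rows printed as "predicted" below
    y_test_pred = sess.run(outputs, feed_dict={X: x_test})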

I encoded the labels using sklearn's LabelEncoder as follows:

from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils

encoder_train = LabelEncoder()
encoder_train.fit(y_train)
encoded_Y_train = encoder_train.transform(y_train)
y_train = np_utils.to_categorical(encoded_Y_train)
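
For reference, the one-hot rows produced by to_categorical can be mapped back to the original labels by taking the argmax of each row and passing it through the fitted encoder. A small sketch using the encoder_train fitted above:

import numpy as np

# Recover the original string labels from the one-hot rows (illustration)
class_indices = np.argmax(y_train, axis=1)
original_labels = encoder_train.inverse_transform(class_indices)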

The data is thereby converted to a sparse binary (one-hot) matrix.
When I try to predict the output, I get the following results:

actual==>  [[0. 0. 1.]
 [1. 0. 0.]
 [1. 0. 0.]
 [0. 0. 1.]
 [1. 0. 0.]
 [1. 0. 0.]
 [1. 0. 0.]
 [0. 1. 0.]
 [0. 1. 0.]] 
predicted==>  [[0.3112209  0.3690182  0.31357136]
 [0.31085992 0.36959863 0.31448898]
 [0.31073445 0.3703295  0.31469804]
 [0.31177694 0.37011752 0.3145326 ]
 [0.31220382 0.3692756  0.31515726]
 [0.31232828 0.36947766 0.3149037 ]
 [0.31190437 0.36756667 0.31323162]
 [0.31339088 0.36542615 0.310322  ]
 [0.31598282 0.36328828 0.30711085]] 

What I expect are the labels according to the encoding that was done, just as with the Keras model. See the following:

predictions = model.predict_classes(X_test, verbose=True)
print("REAL VALUES:",reverse_category(Y_test,axis=1))
print("PRED VALUES:",predictions)
print("REAL COLORS:")
print(encoder.inverse_transform(reverse_category(Y_test,axis=1)))
print("PREDICTED COLORS:")
print(encoder.inverse_transform(predictions))

The output is similar to the following:

REAL VALUES: [1 1 1 ... 1 2 1]
PRED VALUES: [2 1 1 ... 1 2 2]
REAL COLORS:
['ball' 'ball' 'ball' ... 'ball' 'bat' 'ball']
PREDICTED COLORS:
['bat' 'ball' 'ball' ... 'ball' 'bat' 'bat']

Please tell me what I can do in the TensorFlow model so that I get the results in terms of the encoding that was done.
I am using TensorFlow 1.12.0 and Windows 10.

sdc*_*cbr 5

I think what you want to do is map the predicted class probabilities back to class labels. Each row in the list of output predictions contains the three predicted probabilities for the three classes; you can take the argmax along each row to map to the actual predicted class (i.e. the class with the highest predicted probability):

import numpy as np

predictions = [[0.3112209,  0.3690182,  0.31357136],
 [0.31085992, 0.36959863, 0.31448898],
 [0.31073445, 0.3703295, 0.31469804],
 [0.31177694, 0.37011752, 0.3145326 ],
 [0.31220382, 0.3692756, 0.31515726],
 [0.31232828, 0.36947766, 0.3149037 ],
 [0.31190437, 0.36756667, 0.31323162],
 [0.31339088, 0.36542615, 0.310322  ],
 [0.31598282, 0.36328828, 0.30711085]] 

np.argmax(predictions, axis=1) 

which gives:

array([1, 1, 1, 1, 1, 1, 1, 1, 1])

In this case, class 1 is predicted 9 times.
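
If you also want the original string labels rather than the integer class indices, you can pass the argmax indices through the same LabelEncoder that was fitted on the training labels. A sketch assuming the encoder_train from the question:

predicted_indices = np.argmax(predictions, axis=1)
# Map the integer class indices back to the original labels, e.g. 'ball' / 'bat'
predicted_labels = encoder_train.inverse_transform(predicted_indices)
print(predicted_labels)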

As pointed out in the comments: this is exactly what Keras does, as you can see in its source code.
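
In other words, for a multi-class output, predict_classes essentially reduces to an argmax over the predicted probabilities, roughly like this simplified sketch:

proba = model.predict(X_test)
# For a multi-class output (last dimension > 1), predict_classes is just an argmax
predicted_classes = np.argmax(proba, axis=-1)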