Tensorflow predict the class of output

I have already tried this example with Keras, but without an LSTM. My model uses an LSTM in TensorFlow, and I want to predict the output as classes, the way a Keras model does with predict_classes.
The TensorFlow model I am trying is this:

import tensorflow as tf

# hyperparameters
seq_len = 10
n_steps = seq_len - 1
n_inputs = x_train.shape[2]
n_neurons = 50
n_outputs = y_train.shape[1]
n_layers = 2
learning_rate = 0.0001
batch_size = 100
n_epochs = 1000
train_set_size = x_train.shape[0]
test_set_size = x_test.shape[0]

tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_outputs])

# stack of peephole LSTM cells
layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons, activation=tf.nn.sigmoid, use_peepholes=True)
          for layer in range(n_layers)]

multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
outputs = outputs[:, n_steps-1, :]              # keep only the last time step
loss = tf.reduce_mean(tf.square(outputs - y))   # mean squared error against the one-hot targets
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

I am encoding the labels with the sklearn LabelEncoder as follows:

from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils

encoder_train = LabelEncoder()
encoder_train.fit(y_train)
encoded_Y_train = encoder_train.transform(y_train)   # string labels -> integer codes
y_train = np_utils.to_categorical(encoded_Y_train)   # integer codes -> one-hot rows

This converts the labels into a binary (one-hot) matrix.
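
For illustration, a minimal sketch of what this encoding produces, using hypothetical labels (the actual label values are not shown above; 'pitch' is only a placeholder for the unseen third class):

from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils

# Hypothetical labels standing in for the real ones in the question.
sample_labels = ['ball', 'bat', 'pitch', 'ball']

enc = LabelEncoder()
codes = enc.fit_transform(sample_labels)    # array([0, 1, 2, 0])
one_hot = np_utils.to_categorical(codes)    # rows like [1. 0. 0.], [0. 1. 0.], ...
print(one_hot)
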
When I try to predict the output, I get the following:

actual==>  [[0. 0. 1.]
 [1. 0. 0.]
 [1. 0. 0.]
 [0. 0. 1.]
 [1. 0. 0.]
 [1. 0. 0.]
 [1. 0. 0.]
 [0. 1. 0.]
 [0. 1. 0.]] 
predicted==>  [[0.3112209  0.3690182  0.31357136]
 [0.31085992 0.36959863 0.31448898]
 [0.31073445 0.3703295  0.31469804]
 [0.31177694 0.37011752 0.3145326 ]
 [0.31220382 0.3692756  0.31515726]
 [0.31232828 0.36947766 0.3149037 ]
 [0.31190437 0.36756667 0.31323162]
 [0.31339088 0.36542615 0.310322  ]
 [0.31598282 0.36328828 0.30711085]] 

What I was expecting were the labels, based on the encoding that was done, just as with the Keras model. See the following:

predictions = model.predict_classes(X_test, verbose=True)
print("REAL VALUES:",reverse_category(Y_test,axis=1))
print("PRED VALUES:",predictions)
print("REAL COLORS:")
print(encoder.inverse_transform(reverse_category(Y_test,axis=1)))
print("PREDICTED COLORS:")
print(encoder.inverse_transform(predictions))

The output looks something like this:

REAL VALUES: [1 1 1 ... 1 2 1]
PRED VALUES: [2 1 1 ... 1 2 2]
REAL COLORS:
['ball' 'ball' 'ball' ... 'ball' 'bat' 'ball']
PREDICTED COLORS:
['bat' 'ball' 'ball' ... 'ball' 'bat' 'bat']

Please let me know what I can do in the TensorFlow model so that I get results that correspond to the encoding that was done.
I am using Tensorflow 1.12.0 on Windows 10.

You are trying to map the predicted class probabilities back to class labels. Each row in the list of predictions contains the three predicted class probabilities. Use np.argmax to pick the one with the highest predicted probability and map it to the predicted class label:

import numpy as np

predictions = [[0.3112209,  0.3690182,  0.31357136],
 [0.31085992, 0.36959863, 0.31448898],
 [0.31073445, 0.3703295, 0.31469804],
 [0.31177694, 0.37011752, 0.3145326 ],
 [0.31220382, 0.3692756, 0.31515726],
 [0.31232828, 0.36947766, 0.3149037 ],
 [0.31190437, 0.36756667, 0.31323162],
 [0.31339088, 0.36542615, 0.310322  ],
 [0.31598282, 0.36328828, 0.30711085]] 

np.argmax(predictions, axis=1) 

This gives:

array([1, 1, 1, 1, 1, 1, 1, 1, 1])

In this case, class 1 is predicted 9 times.
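
If you also want the original string labels back, as in the Keras snippet in the question, you can run these indices through the fitted LabelEncoder. A minimal sketch, assuming the predictions list above and a hypothetical label set (the full set of labels in the question is not shown; 'pitch' is only a placeholder for the third class):

import numpy as np
from sklearn.preprocessing import LabelEncoder

# Hypothetical label set; 'pitch' is only a placeholder for the unseen third class.
encoder = LabelEncoder()
encoder.fit(['ball', 'bat', 'pitch'])          # integer codes 0, 1, 2 (alphabetical)

pred_indices = np.argmax(predictions, axis=1)          # array([1, 1, ..., 1])
print(encoder.inverse_transform(pred_indices))         # e.g. ['bat' 'bat' ... 'bat']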

As mentioned in the comments: this is exactly what Keras does under the hood, as you can see in the source code.
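
If you prefer to get the class index directly from the TensorFlow graph rather than post-processing with NumPy, one option is to add a tf.argmax op on top of the outputs tensor. This is only a sketch, assuming the graph and the fitted encoder_train from the question:

# Assumes the graph from the question is already built, so `outputs` holds the
# per-class scores of the last time step (shape [batch_size, n_outputs]).
predicted_class = tf.argmax(outputs, axis=1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run training_op over the epochs here ...
    pred_idx = sess.run(predicted_class, feed_dict={X: x_test})
    print(pred_idx)                                     # integer class indices
    print(encoder_train.inverse_transform(pred_idx))    # back to the original labels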