How to create a confusion matrix for classification in TensorFlow
I have a CNN model with 4 output nodes, and I am trying to compute the confusion matrix so that I can know the accuracy of each class. I am able to compute the overall accuracy.
In the link, Igor Valantic gives a function that computes the confusion-matrix variables.
It gives me an error at correct_prediction = tf.nn.in_top_k(logits, labels, 1, name="correct_answers"), and the error is TypeError: DataType float32 for attr 'T' not in list of allowed values: int32, int64.
I have tried typecasting logits to int32 inside the mentioned function def evaluation(logits, labels); this gives another error while computing correct_prediction = ...: TypeError: Input 'predictions' of 'InTopK' Op has type int32 that does not match expected type of float32.
How do I calculate this confusion matrix?
sess = tf.Session()
model = dimensions() # CNN input weights are calculated
data_train, data_test, label_train, label_test = load_data(files_test2, folder)
data_train, data_test, = reshapedata(data_train, data_test, model)
# input output placeholders
x = tf.placeholder(tf.float32, [model.BATCH_SIZE, model.input_width, model.input_height, model.input_depth]) # last column = 1
y_ = tf.placeholder(tf.float32, [model.BATCH_SIZE, model.No_Classes])
p_keep_conv = tf.placeholder("float")
#
y = mycnn(x, model, p_keep_conv)
# loss
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
# train step
train_step = tf.train.AdamOptimizer(1e-3).minimize(cost)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
true_positives, false_positives, true_negatives, false_negatives = evaluation(y, y_)
lossfun = np.zeros(STEPS)
sess.run(tf.global_variables_initializer())
for i in range(STEPS):
    image_batch, label_batch = batchdata(data_train, label_train, model.BATCH_SIZE)
    epoch_loss = 0
    for j in range(model.BATCH_SIZE):
        sess.run(train_step, feed_dict={x: image_batch, y_: label_batch, p_keep_conv: 1.0})
        c = sess.run(cost, feed_dict={x: image_batch, y_: label_batch, p_keep_conv: 1.0})
        epoch_loss += c
    lossfun[i] = epoch_loss
    print('Epoch', i, 'completed out of', STEPS, 'loss:', epoch_loss)
    TP, FP, TN, FN = sess.run([true_positives, false_positives, true_negatives, false_negatives], feed_dict={x: image_batch, y_: label_batch, p_keep_conv: 1.0})
This is my code snippet.
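For reference, the TypeError above comes from the labels argument rather than the logits: tf.nn.in_top_k expects float logits and integer class indices. A minimal sketch of the adjustment, assuming evaluation uses tf.nn.in_top_k as in the linked answer and reusing the y and y_ tensors from the snippet above (labels_idx is an illustrative name):
# tf.nn.in_top_k(predictions, targets, k) wants float32 predictions and
# int32/int64 targets, so convert the one-hot y_ placeholder to class ids
# instead of casting the logits
labels_idx = tf.argmax(y_, 1)   # int64 class indices, shape [batch]
correct_prediction = tf.nn.in_top_k(y, labels_idx, 1, name="correct_answers")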
You can simply use TensorFlow's confusion matrix. I assume y is your predictions, and you may or may not have num_classes (it is optional).
y_ = placeholder_for_labels # for eg: [1, 2, 4]
y = mycnn(...) # for eg: [2, 2, 4]
confusion = tf.confusion_matrix(labels=y_, predictions=y, num_classes=num_classes)
If you print(confusion), you get
[[0 0 0 0 0]
[0 0 1 0 0]
[0 0 1 0 0]
[0 0 0 0 0]
[0 0 0 0 1]]
If print(confusion) does not print the confusion matrix, then use print(confusion.eval(session=sess)). Here sess is the name of your TensorFlow session.
import tensorflow as tf
y = [1, 2, 4]
y_ = [2, 2, 4]
con = tf.confusion_matrix(labels=y_, predictions=y )
sess = tf.Session()
with sess.as_default():
    print(sess.run(con))
The output is:
[[0 0 0 0 0]
[0 0 0 0 0]
[0 1 1 0 0]
[0 0 0 0 0]
[0 0 0 0 1]]
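Since the original goal was per-class accuracy, the confusion matrix can be reduced to it directly. A minimal sketch, assuming the x, y, y_, p_keep_conv, model and sess from the question's graph, where y are logits and y_ are one-hot labels, so tf.argmax is applied before tf.confusion_matrix; per-class accuracy is then the diagonal divided by the row sums:
import numpy as np
import tensorflow as tf

# convert logits / one-hot labels to class indices before building the matrix
predictions = tf.argmax(y, 1)
labels = tf.argmax(y_, 1)
confusion = tf.confusion_matrix(labels=labels, predictions=predictions, num_classes=model.No_Classes)

cm = sess.run(confusion, feed_dict={x: image_batch, y_: label_batch, p_keep_conv: 1.0})
# rows are true classes, columns are predicted classes:
# per-class accuracy = correct predictions (diagonal) / samples of that class (row sum)
# (assumes every class appears at least once in the evaluated batch)
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)
print(per_class_accuracy)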