Tensorflow: Visualizing trained weights for linear classifier on MNIST dataset

I trained a linear classifier on the MNIST dataset with 92% accuracy. Then I fixed the weights and optimized the input image so that the softmax probability for the digit 8 is maximized. But the softmax loss never drops below 2.302 (-log(1/10)), which means my training is useless. What am I doing wrong?
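
For reference, 2.302 is just the cross-entropy of a completely uninformative (uniform) prediction over the 10 digit classes; a quick check:

import numpy as np
print(-np.log(1.0 / 10))  # ~2.302585, the loss of a uniform softmax over 10 classes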

Code for training the weights:

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels

X = tf.placeholder("float", [None, 784])
Y = tf.placeholder("float", [None, 10])

w = tf.Variable(tf.random_normal([784, 10], stddev=0.01))
b = tf.Variable(tf.zeros([10]))

o = tf.nn.sigmoid(tf.matmul(X, w)+b)

cost= tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=o, labels=Y))
train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
predict_op = tf.argmax(o, 1)

sess=tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(100):
  for start, end in zip(range(0, len(trX), 256), range(256, len(trX)+1, 256)):
      sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})
  print(i, np.mean(np.argmax(teY, axis=1) == sess.run(predict_op, feed_dict={X: teX})))

Code for training the image with the weights fixed:

# Copy the trained weights into W, B and feed them as placeholders to the new model
W=sess.run(w)
B=sess.run(b)

X=tf.Variable(tf.random_normal([1, 784], stddev=0.01))
Y=tf.constant([0, 0, 0, 0, 0, 0, 0, 0, 1, 0])

w=tf.placeholder("float")
b=tf.placeholder("float")

o = tf.nn.sigmoid(tf.matmul(X, w)+b)

cost= tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=o, labels=Y))
train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
predict_op = tf.argmax(o, 1)

sess.run(tf.global_variables_initializer())
for i in range(1000):
  sess.run(train_op, feed_dict={w:W, b:B})
  if i%50==0:
    sess.run(cost, feed_dict={w:W, b:B})
    print(i, sess.run(predict_op, feed_dict={w:W, b:B}))

You should not call tf.sigmoid on the output of your network. softmax_cross_entropy_with_logits assumes its inputs are logits, i.e. unconstrained real numbers. Using

o = tf.matmul(X, w)+b

raises your accuracy to 92.8%.
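
To see why the sigmoid pins the loss near 2.302: sigmoid outputs are confined to [0, 1], so the values fed into the softmax differ by at most 1 and the resulting probabilities stay close to uniform no matter how good the weights are. A rough numpy illustration (not part of the original code, just a sketch):

import numpy as np

def softmax_xent(logits, target):
    # cross-entropy of a one-hot target against softmax(logits)
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target])

# Best case when the "logits" are sigmoid outputs (target saturated at 1, rest at 0):
print(softmax_xent(np.array([0.]*8 + [1., 0.]), 8))   # ~1.46, barely better than uniform
# With unconstrained logits the target probability can approach 1 and the loss approach 0:
print(softmax_xent(np.array([0.]*8 + [10., 0.]), 8))  # ~0.0004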

With this change your second training works as well. The cost reaches 0, although the resulting image is anything but appealing.
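
For completeness, a minimal sketch of the second script with the sigmoid removed, assuming the same session and the trained W, B from above are still available. The one-hot target is written as a float tensor of shape [1, 10], and var_list=[X] is added here only to make explicit that just the input image is optimized:

X = tf.Variable(tf.random_normal([1, 784], stddev=0.01))
Y = tf.constant([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]])  # one-hot target for the digit 8

w = tf.placeholder("float", [784, 10])
b = tf.placeholder("float", [10])

# Raw logits; softmax_cross_entropy_with_logits applies the softmax itself
o = tf.matmul(X, w) + b

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=o, labels=Y))
train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost, var_list=[X])

sess.run(tf.global_variables_initializer())
for i in range(1000):
    sess.run(train_op, feed_dict={w: W, b: B})
    if i % 50 == 0:
        print(i, sess.run(cost, feed_dict={w: W, b: B}))  # cost now decreases toward 0

# The optimized input can then be reshaped to 28x28 for visualization
image = sess.run(X).reshape(28, 28)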