Fast softmax regression implementation in TensorFlow
I'm trying to implement a softmax regression model in TensorFlow to benchmark it against other mainstream deep learning frameworks. The official documentation code is slow because of the feed_dict overhead in TensorFlow. I'm trying to feed the data as TensorFlow constants instead, but I don't know the most efficient way to do that. Right now I just bake a single batch in as a constant and train on that one batch. What would an efficient minibatch version of this code look like? Here is my code:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np

mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
batch_xs, batch_ys = mnist.train.next_batch(100)

# The single batch is baked into the graph as constants
x = tf.constant(batch_xs, name="x")
W = tf.Variable(0.1 * tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b

batch_y = batch_ys.astype(np.float32)
y_ = tf.constant(batch_y, name="y_")
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
....
# The minibatch is never updated inside this for loop
for i in range(5500):
    sess.run(train_step)
You can do it as follows:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np

batch_size = 32  # any size you want
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

# Placeholders accept a new minibatch on every sess.run call
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(0.1 * tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
....
# A fresh minibatch is drawn and fed on every iteration
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
    l, _ = sess.run([loss, train_step], feed_dict={x: batch_xs, y: batch_ys})
    print(l)  # loss for every minibatch
A shape like [None, 784] lets you feed any input of shape [?, 784], i.e., any batch size.
I haven't tested this code, but I hope it works.
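If you specifically want to avoid feed_dict (your constant-based idea), one option is TensorFlow's tf.data input pipeline. This is an untested sketch and assumes TF 1.4+ (where tf.data is available); the "MNIST_data" path is a placeholder. It embeds the training set in the graph as constants and produces shuffled minibatches with no Python-side feeding:

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np

batch_size = 32
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)  # placeholder path

# from_tensor_slices embeds the arrays in the graph as constants,
# so no feed_dict is needed at run time
dataset = tf.data.Dataset.from_tensor_slices(
    (mnist.train.images, mnist.train.labels.astype(np.float32)))
dataset = dataset.shuffle(buffer_size=10000).repeat().batch(batch_size)
x, y = dataset.make_one_shot_iterator().get_next()

W = tf.Variable(0.1 * tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        l, _ = sess.run([loss, train_step])  # no feed_dict here
        print(l)

Each sess.run pulls the next minibatch from the in-graph pipeline, so the data never round-trips through Python between steps.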